Sunday, June 7, 2015

Java 8 Features

Lambda expressions allow you to pass in a block of code, a function without a name. The -> separates the parameters from the body of the lambda expression. The parameters of a lambda expression usually carry no explicit type because the type is inferred from the context; they are nevertheless statically typed. Some different ways of writing lambda expressions are as below:

// 1) empty pair of parentheses, (), can be used to signify that there are no arguments.

Runnable noArguments = () -> System.out.println("Hello World");
// 2) when there is only one argument to the lambda expression we can leave out the parentheses around the arguments.

ActionListener oneArgument = event -> System.out.println("button clicked");
// 3) full block of code, bookended by curly braces ({})

Runnable multiStatement = () -> {
   System.out.print("Hello");
   System.out.println(" World");
};
// 4) This creates a function that adds together two numbers, where the variable called add 
//    isn’t the result of adding up two numbers; it is code that adds together two numbers.

BinaryOperator<Long> add = (x, y) -> x + y;
// 5) providing explicit types for the arguments requires surrounding the arguments to the lambda expression with parentheses. 
//    The parentheses are also necessary if you’ve got multiple arguments.

BinaryOperator<Long> addExplicit = (Long x, Long y) -> x + y;

The target type of a lambda expression is the type of the context in which the lambda expression appears—for example, a local variable that it’s assigned to or a method parameter that it gets passed into.

Lambda expressions can only refer to local variables that are final or effectively final, since they capture values, not variables. Although we are not required to declare such variables final, we cannot treat them as nonfinal variables if they are used in a lambda expression. If you assign to a variable multiple times and then try to use it in a lambda expression, you’ll get a compile error.

Lambda expressions are statically typed, and the types of lambda expressions are called functional interfaces. A functional interface is an interface with a single abstract method that is used as the type of a lambda expression.

Table 2-1. Important functional interfaces in Java
Interface name      Arguments  Returns  Example
Predicate<T>        T          boolean  Has this album been released yet?
Consumer<T>         T          void     Printing out a value
Function<T, R>      T          R        Get the name from an Artist object
Supplier<T>         None       T        A factory method
UnaryOperator<T>    T          T        Logical not (!)
BinaryOperator<T>   (T, T)     T        Multiplying two numbers (*)
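For instance, each row of the table can be instantiated directly with a lambda; a minimal sketch (the example values are illustrative, and the interfaces live in java.util.function):

// import java.util.function.*;
Predicate<String> isEmpty = s -> s.isEmpty();        // T -> boolean
Consumer<String> print = s -> System.out.println(s); // T -> void
Function<String, Integer> length = s -> s.length();  // T -> R
Supplier<String> greeting = () -> "Hello World";     // () -> T
UnaryOperator<Boolean> not = b -> !b;                // T -> T, logical not
BinaryOperator<Integer> multiply = (x, y) -> x * y;  // (T, T) -> T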

Java 8 allows you to leave out the types of whole parameters of lambda expressions. The Java compiler looks for information close to the lambda expression and uses it to figure out what the correct type should be. The code is still type checked and provides all the safety that you’re used to, but you don’t have to state the types explicitly. This is what we mean by type inference.

A Predicate is a functional interface that checks whether something is true or false. A lambda expression assigned to a Predicate returns a value, unlike the previous ActionListener examples.
Predicate<Integer> atLeast5 = x -> x > 5;

Here the return value of the lambda expression is the value its body evaluates to.

BinaryOperator is a functional interface that takes two arguments and returns a value, all of the same type. It takes only a single generic type argument. If no generic argument is specified, the code doesn't compile.
BinaryOperator<Long> addLongs = (x, y) -> x + y;

Some of the Key Points on Lambda Expressions:
  • A lambda expression is a method without a name that is used to pass around behavior as if it were data.
  • Lambda expressions look like this: BinaryOperator<Integer> add = (x, y) -> x + y.
  • A functional interface is an interface with a single abstract method that is used as the type of a lambda expression.

Streams

In a for loop over a collection, the iteration proceeds by creating a new Iterator object and then explicitly calling the hasNext and next methods on it. Hence it is hard to abstract away the different behavioral operations, and the iteration is inherently serial in nature.

Streams allow us to write collections-processing code at a higher level of abstraction. A Stream is a tool for building up complex operations on collections using a functional approach. The Stream interface contains a series of functions, each of which corresponds to a common operation that we might perform on a Collection. The call to stream() returns a Stream, which plays the same role for internal iteration that an Iterator plays for external iteration.
long count = allArtists.stream().filter(artist -> artist.isFrom("London")).count();

The operations of the Streams API do not change the contents of the collection but describe the contents of the Stream. The Stream object returned isn’t a new collection; it’s a recipe for creating a new collection. The filter method of the Stream above keeps only those objects that pass a test by returning either true or false. The call to filter builds up a Stream recipe, but there’s nothing to force this recipe to be used. Methods such as filter that build up the Stream recipe but don’t force a new value to be generated at the end are referred to as lazy. Methods such as count that generate a final value out of the Stream sequence are called eager. If an operation gives back a Stream, it’s lazy; if it gives back another value or void, it’s eager. The values in the Stream that are operated on are derived from the initial values and the recipe produced by the sequence of Stream calls.
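A quick way to observe the laziness, in a sketch reusing the allArtists collection and isFrom method assumed above: the println below never runs, because filter only builds up the recipe and nothing eager forces its evaluation.

allArtists.stream()
          .filter(artist -> {
              System.out.println(artist.getName()); // never printed: no eager operation
              return artist.isFrom("London");
          });
// Appending an eager operation such as .count() would make the names print.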

The collect(toList()) is an eager operation that generates a list from the values in a Stream.
List<String> collected = Stream.of("a", "b", "c").collect(Collectors.toList());

The map operation allows us to apply a function (say, converting a value of one type into another) to a stream of values, producing another stream of the new values. The lambda passed to map is an instance of Function.
List<String> collected = Stream.of("a", "b", "hello").map(string -> string.toUpperCase()).collect(toList());

The filter method on the stream allows us to check each element of the collection. The Stream after the filter contains only those elements of the preceding Stream for which the predicate evaluated to true. The lambda passed to filter is an instance of the Predicate interface.

The flatMap method allows us to replace a value with a Stream and concatenates all the streams together. It is a variant of the map operation that produces a new Stream object as the replacement. Its associated functional interface is the same as map’s, but its return type is restricted to streams rather than any value.
// Takes a Stream of lists of numbers and returns all the numbers from the sequences.

List<Integer> together = Stream.of(asList(1, 2), asList(3, 4)).flatMap(numbers -> numbers.stream()).collect(toList());

The max and min methods of the Streams API find the maximum or minimum element of a stream. A Comparator is passed to determine the ordering of the elements. The comparing method in Java 8 builds a Comparator from a key-extracting function; the getter function lets it pull the same value out of both elements being compared. The max and min methods return an Optional value, which represents a value that may exist or may not, since the Stream may be empty.
Track shortestTrack = tracks.stream().min(Comparator.comparing(track -> track.getLength())).get();

The reduce operation is used to generate a single result from a collection of values. It takes an initial value and a lambda expression that combines the accumulator with each element. In the summing example below, the accumulator starts at 0, which is the sum of an empty Stream. The type of the reducer is a BinaryOperator.
int count = Stream.of(1, 2, 3).reduce(0, (acc, element) -> acc + element);

Higher-order functions: A higher-order function is a function that either takes another function as an argument or returns a function as its result. If a functional interface is used as a parameter or return type, then we have a higher-order function. Nearly all the functions that we’ve encountered on the Stream interface are higher-order functions. Comparator is also a functional interface as it has only a single abstract method.

Streams describe operations on data by saying what transformation is made rather than how the transformation occurs. They push us towards the idea of a side effect-free function. Functions with no side effects don’t change the state of anything else in the program or the outside world. Lambda expressions capture values rather than variables (since local variables must be effectively final), which promotes writing code free from side effects. The only exception to this is the forEach method, which is a terminal operation.

Java 8 introduces default methods and static methods on interfaces, enabling interfaces to have methods with bodies containing code.

Boxed types, which wrap up the primitive types, are objects and have a memory overhead. A primitive int takes 4 bytes of memory, while the boxed type Integer takes 16 bytes. Further, an Integer[] takes up nearly six times more memory than an int[] of the same size. There is also a computational overhead when converting from a primitive type to a boxed type (boxing) and vice versa (unboxing). The streams library therefore differentiates between the primitive and boxed versions of some library functions, e.g. the mapToLong higher-order function and ToLongFunction.
If the return type is a primitive, the interface is prefixed with To and the primitive type, as in ToLongFunction. If the argument type is a primitive type, the name prefix is just the type name, as in LongFunction. If the higher-order function uses a primitive type, it is suffixed with To and the primitive type, as in mapToLong. Methods such as mapToLong return specialized streams such as IntStream, DoubleStream, and LongStream instead of Stream. The min, max, average, and sum methods, along with summaryStatistics, are all available on all three primitive specialized Stream variants.
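A minimal sketch of a primitive-specialized pipeline, reusing the Album and Track types assumed elsewhere in these notes: mapToInt yields an IntStream, whose summaryStatistics method computes min, max, average, and sum in a single pass.

// import java.util.IntSummaryStatistics;
public static void printTrackLengthStatistics(Album album) {
    // mapToInt avoids boxing each track length into an Integer
    IntSummaryStatistics trackLengthStats = album.getTracks()
                                                 .mapToInt(track -> track.getLength())
                                                 .summaryStatistics();
    System.out.printf("Max: %d, Min: %d, Ave: %f, Sum: %d",
                      trackLengthStats.getMax(), trackLengthStats.getMin(),
                      trackLengthStats.getAverage(), trackLengthStats.getSum());
}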

When methods are overloaded, Java infers the type of the lambda to be the most specific functional interface when calling these methods (a sketch follows this list):
  • If there is a single possible target type, the lambda expression infers the type from the corresponding argument on the functional interface.
  • If there are several possible target types, the most specific type is inferred.
  • If there are several possible target types and there is no most specific type, you must manually provide a type.
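A sketch with a hypothetical pair of overloads: Predicate<Integer> and IntPredicate are unrelated types, so neither is more specific and a bare lambda is ambiguous; a cast supplies the target type manually.

// hypothetical overloads, both from java.util.function
void overloadedMethod(Predicate<Integer> predicate) { System.out.println("Predicate"); }
void overloadedMethod(IntPredicate predicate) { System.out.println("IntPredicate"); }

overloadedMethod((x) -> true);              // compile error: no most specific type
overloadedMethod((IntPredicate) x -> true); // OK: the cast provides the target type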

@FunctionalInterface is an annotation that should be applied to any interface that is intended to be used as a functional interface. The new interfaces of this kind provide Stream interoperability and are really there to bundle up blocks of code as data. Interfaces such as java.lang.Comparable and java.io.Closeable have only a single abstract method (which depends on the object's internal state) but aren’t normally meant to be implemented by lambda expressions, so they don’t carry the annotation; @FunctionalInterface distinguishes the interfaces that are meant to be used with lambda expressions.

Default methods let interfaces provide implementations that implementing classes can use, e.g. the stream method on Collection or the forEach method on Iterable. Unlike classes, interfaces don’t have instance fields, so default methods can modify their child classes only by calling methods on them. This helps avoid making assumptions about the implementation of their children. A default method is a virtual method, in contrast to a static method. A class or concrete method override always takes precedence over a default method: if a child class extends a parent class that implements an interface with a default method, and the child class also implements another interface with a default method of the same signature, the method inherited from the parent class takes precedence over the default method from the interface.

When a class implements multiple interfaces with the same default method, the result is a compile error. With the new enhanced super syntax, i.e. the InterfaceName.super variant, it’s possible to specify which inherited interface’s method to call.

Below are the rules for multiple inheritance for default methods.
  1. Any class wins over any interface. So if there’s a method with a body, or an abstract declaration, in the superclass chain, we can ignore the interfaces completely.
  2. Subtype wins over supertype. If we have a situation in which two interfaces are competing to provide a default method and one interface extends the other, the subclass wins.
  3. No rule 3. If the previous two rules don’t give us the answer, the subclass must either implement the method or declare it abstract.
Default methods avoid multiple inheritance of state but allow inheritance of blocks of code. Java 8 also enables static methods on an interface, e.g. the of() method of the Stream interface.
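A sketch of rule 3 and the enhanced super syntax: two unrelated interfaces supply the same default method, so the class must pick one explicitly.

public interface Carriage {
    default String rock() { return "... from side to side"; }
}

public interface Jukebox {
    default String rock() { return "... all over the world!"; }
}

public class MusicalCarriage implements Carriage, Jukebox {
    // without this override the class fails to compile (rule 3)
    @Override
    public String rock() {
        return Carriage.super.rock(); // InterfaceName.super picks an inherited default
    }
}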

Optional is a new core library data type that is designed to provide a better alternative to null. null is often used to represent the absence of a value, and this is the use case that Optional replaces. The problem with using null to represent absence is the dreaded NullPointerException. Optional encourages the coder to make appropriate checks as to whether a value is present, in order to avoid bugs. Second, it documents values that are expected to be absent in a class’s API. The of factory method creates an Optional instance from a value, the empty factory method represents an absent value, and a nullable value can be converted into an Optional using the ofNullable method.
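A minimal sketch of the three factory methods and the common accessors:

Optional<String> a = Optional.of("a");                  // from a known non-null value
Optional<String> emptyOptional = Optional.empty();      // represents an absent value
Optional<String> alsoEmpty = Optional.ofNullable(null); // from a possibly-null value

System.out.println(a.isPresent());         // true
System.out.println(a.get());               // "a"
System.out.println(alsoEmpty.orElse("b")); // "b": fallback when the value is absent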

A common idiom you may have noticed is the creation of a lambda expression that calls a method on its parameter.
artist -> artist.getName()
Such common idiom can be written using method reference as below:
Artist::getName
The standard form of method reference is Classname::methodName. There are no method brackets, since we are not actually calling the method but providing the equivalent of a lambda expression that can be called in order to call the method. Constructors can also be called using the same abbreviated syntax as below:
(name, nationality) -> new Artist(name, nationality) // original
Artist::new // creating an Artist object
String[]::new // creating String array

Method references automatically support multiple parameters, as long as you have the right functional interface.
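A sketch of two-argument method references: a static method reference, and an unbound instance method reference where the first parameter becomes the receiver.

BinaryOperator<Long> add = Long::sum;                        // (x, y) -> Long.sum(x, y)
BiPredicate<String, String> startsWith = String::startsWith; // (s, prefix) -> s.startsWith(prefix)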

Element Ordering
Whether an encounter order is defined depends on both the source of the data and the operations performed on the Stream. When you create a Stream from a collection with a defined order, e.g. a List, the Stream has a defined encounter order. When we map values and there’s a defined encounter order, that encounter order is preserved. When there’s no encounter order on the input Stream, there’s no encounter order on the output Stream. Most operations, such as filter, map, and reduce, can operate very efficiently on ordered streams. Ordering can be removed by using the stream's unordered method. The forEachOrdered method provides an ordering guarantee, unlike the forEach method, especially when using parallel streams.

A collector is a general-purpose construct for producing complex values from streams. Collectors can be used with any Stream by passing them into the collect method, and they can be statically imported from the java.util.stream.Collectors class.
The toList collector produces java.util.List instances, while the toSet and toCollection collectors produce instances of Set and Collection.
List<Integer> numbers = asList(1, 2, 3, 4);
List<Integer> stillOrdered = numbers.stream().map(x -> x + 1).collect(toList());
By calling toList or toSet we don’t get to specify the concrete implementation of the List or Set; the implementation is picked by the stream library.

A collector such as toCollection can take a function to build a specified type of collection as its argument, as below:
stream.collect(toCollection(TreeSet::new));

A single value can also be collected using collectors such as the maxBy and minBy collectors.
public Optional<Artist> biggestGroup(Stream<Artist> artists) {
   // defines a lambda expression that can map an artist to the number of members
   Function<Artist,Long> getCount = artist -> artist.getMembers().count();
   return artists.collect(maxBy(comparing(getCount)));
}

The averagingInt method takes a lambda expression in order to convert each element in the Stream into an int before averaging the values as below.
public double averageNumberOfTracks(List<Album> albums) {
    return albums.stream().collect(averagingInt(album -> album.getTrackList().size()));
}

The partitioningBy collector takes a stream and partitions its contents into two groups. It uses a Predicate to determine whether an element should be part of the true group or the false group, and returns a Map from Boolean to a List of values.
public Map<Boolean, List<Artist>> bandsAndSolo(Stream<Artist> artists) {
 // The method reference Artist::isSolo can be also written as artist -> artist.isSolo()
 return artists.collect(partitioningBy(Artist::isSolo));
}

The groupingBy collector takes a classifier function in order to group the data, similar to the partitioningBy collector, which took a Predicate to split it into true and false values. Below is an example which groups a Stream of albums by the name of their main musician. The groupingBy form below divides elements into buckets. Each bucket gets associated with the key provided by the classifier function, here getMainMusician. The groupingBy operation then uses the downstream collector to collect each bucket and makes a map of the results.
public Map<Artist, Long> numberOfAlbums(Stream<Album> albums) {
    return albums.collect(groupingBy(album -> album.getMainMusician(),
    counting()));
}

The mapping collector allows us to perform a map-like operation over the collector’s container. The mapping collector needs a collection to store the results, which can be provided using the toList collector.
public Map<Artist, List<String>> nameOfAlbums(Stream<Album> albums) {
    return albums.collect(groupingBy(Album::getMainMusician,
    mapping(Album::getName, toList())));
}

In both the above cases, the second collector is used in order to collect a subpart of the final result; such collectors are called downstream collectors.

A comma separated string can be generated from the formatted list using Collectors.joining, which collects the Stream into a String as below:
String result =
    artists.stream()
              .map(Artist::getName)
              .collect(Collectors.joining(", ", "[", "]"));

Data Parallelism

Amdahl’s Law is a simple rule that predicts the theoretical maximum speedup of a program on a machine with multiple cores. If we take a program that is entirely serial and parallelize only half of it, then the maximum speedup possible, regardless of how many cores we throw at the problem, is 2x. Given a large number of cores, and we’re already into that territory, the execution time of a problem is going to be dominated by the serial part of that problem.
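Stated as a formula, in a minimal sketch (the method name is illustrative): with a fraction p of the program parallelizable and n cores, the speedup is bounded by 1 / ((1 - p) + p / n).

static double maxSpeedup(double p, int n) {
    // Amdahl's Law: serial part (1 - p) runs at full cost, parallel part p is divided by n
    return 1.0 / ((1 - p) + p / n);
}
// maxSpeedup(0.5, 1_000_000) is ~2.0: with half the program serial,
// no number of cores pushes the speedup past 2x.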

When we have a Stream object, we can call its parallel method in order to make it parallel. If we’re creating a Stream from a Collection, we can call the parallelStream method in order to create a parallel stream from the get-go. Below is the example which calculates the total length of a sequence of albums in parallel.
public int parallelArraySum() {
    return albums.parallelStream()
                 .flatMap(Album::getTracks)
                 .mapToInt(Track::getLength)
                 .sum();
}

The kinds of problems that parallel stream libraries excel at are those that involve simple operations processing a lot of data, such as simulations. Below is an example of a parallel Monte Carlo simulation. Here we first use the IntStream range function to create a stream of size N, then call the parallel method in order to use the parallel version of the streams framework. The twoDiceThrows function simulates throwing two dice and returns the sum of their results; it is applied to the data stream using the mapToObj method. All the simulation results are combined using the groupingBy collector, and the numbers are mapped to 1/N and added using the summingDouble function.
public Map<integer double=""> parallelDiceRolls() {
    double fraction = 1.0 / N;
    return IntStream.range(0, N)
            .parallel()
            .mapToObj(twoDiceThrows())
            .collect(groupingBy(side -> side,
                                summingDouble(n -> fraction)));
}

When calling reduce, the initial element could be any value, but for the same operation to work correctly in parallel, it needs to be the identity value of the combining function. The identity value leaves all other elements the same when reduced with them. For example, if we’re summing elements with our reduce operation, the combining function is (acc, element) -> acc + element. The initial element must be 0, because any number x added to 0 returns x. Another caveat specific to reduce is that the combining function must be associative, meaning that the order in which the combining function is applied doesn’t matter as long as the values of the sequence aren’t changed.
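A sketch of both caveats: 0 is the identity of +, and + is associative, so the chunks of a parallel reduce can be combined in any order and still agree with the sequential result.

int sum = Stream.of(1, 2, 3, 4)
                .parallel()
                .reduce(0, (acc, element) -> acc + element); // identity 0: 0 + x == x
// Associativity: (1 + 2) + 3 == 1 + (2 + 3), so chunk boundaries don't change the answer.
// A non-identity initial value (say 1) would be added once per chunk in parallel,
// giving a result that differs from the sequential one.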

The streams framework deals with any necessary synchronization itself, so there’s no need to lock the data structures. If we tried to hold locks on any data structure that the streams library is using, such as the source collection of an operation, it would likely cause issues. Stream also has a sequential method in addition to the parallel method. When a stream pipeline is evaluated, there is no mixed mode, i.e. the orientation is either parallel or sequential; if a pipeline has calls to both parallel and sequential, the last call wins. Under the hood, parallel streams map back onto the fork/join framework: the fork stage recursively splits up a problem, each chunk is operated upon in parallel, and then the results are merged back together in the join stage.

The common data sources from the core library can be split up into three main groups by performance characteristics:

The good: An ArrayList, an array, or the IntStream.range constructor. These data sources all support random access, which means they can be split up arbitrarily with ease.
The okay: The HashSet and TreeSet. These cannot be decomposed easily with perfect amounts of balance, but most of the time it’s possible to do so.
The bad: Some data structures don’t split well; for example, they may take O(N) time to decompose. Examples here include a LinkedList, which is computationally hard to split in half. Also, Stream.iterate and BufferedReader.lines have unknown length at the beginning, so it’s pretty hard to estimate when to split these sources.

Ideally, once the streams framework has decomposed the problem into smaller chunks, we’ll be able to operate on each chunk in its own thread, with no further communication or contention between threads.

Java 8 includes a couple of other parallel array operations that utilize lambda expressions outside of the streams framework. These operations are all located on the utility class Arrays, which also contains a bunch of other useful array-related functionality from previous Java versions.

Name            Operation
parallelPrefix  Calculates running totals of the values of an array given an arbitrary function
parallelSetAll  Updates the values in an array using a lambda expression
parallelSort    Sorts elements in parallel

The parallelSetAll method is used to easily initialize an array in parallel instead of using a for loop. An array to operate on and a lambda expression that calculates each value given its index are provided. The array passed into the operation is altered, rather than a new copy being created.
public static double[] parallelInitialize(int size) {
      double[] values = new double[size];
      Arrays.parallelSetAll(values, i -> i);
      return values;
}

The parallelPrefix operation is useful for performing accumulation-type calculations over a time series of data. It mutates an array, replacing each element with the sum (or any BinaryOperator application) of that element and its predecessors. The example below takes a rolling window over a time series and produces an average for each position of that window.
public static double[] simpleMovingAverage(double[] values, int n) {
      double[] sums = Arrays.copyOf(values, values.length);
      Arrays.parallelPrefix(sums, Double::sum);
      int start = n - 1;
      return IntStream.range(start, sums.length)
                  .mapToDouble(i -> {
                        double prefix = i == start ? 0 : sums[i - n];
                        return (sums[i] - prefix) / n;
                  })
            .toArray();
}


Sunday, May 10, 2015

EJB Interview Questions

What are benefits of using EJB ?
  • The development of EJB applications is easy, as the business logic is separated from the system-level services, and the application developer can at the same time utilize the services of the EJB container.
  • The Application Server/EJB container provides most of the system-level services like transaction handling, logging, load balancing, persistence mechanisms, exception handling and so on. The developer has to focus only on the business logic of the application.
  • The EJB container manages the life cycle of EJB instances, so the developer need not worry about when to create or delete EJB objects.
  • The EJB architecture is compatible with other APIs like servlets and JSPs.
  • EJB allows us to build applications using two different layered architectures: the traditional four-tier architecture and domain-driven design (using EJB3). Such isolation of components makes it easy to develop, deploy and manage EJB applications.

What are benefits of using EJB 3 ?
  • It offers seamless integration with other Java EE technologies and a complete stack of server solutions, including persistence, messaging, lightweight scheduling, remoting, web services, dependency injection (DI), and interceptors.
  • EJB 3 enables us to develop an EJB component using POJOs and POJIs that know nothing about platform services.
  • EJB 3 allows us to use metadata annotations to configure a component instead of using XML deployment descriptors.
  • JNDI lookups have been turned into simple configuration using metadata-based dependency injection (DI). E.g. the @EJB annotation injects an EJB into the annotated variable.
  • EJB 3 components are POJOs, and can easily be executed outside the container using testing frameworks such as JUnit or TestNG.

What are the different types of EJB ?
There are three types of EJB beans: Entity Beans, Session Beans and Message Driven Beans (MDB).

Session Beans: A session bean encapsulates business logic that can be invoked programmatically by a client over local, remote, or web service client views. A session bean instance is available only for the duration of a “unit of work” and is not persistent hence it does not survive a server crash or shutdown. There are two types of session beans: Stateful and Stateless Session Beans.
  • Stateful Session Bean: A stateful session bean automatically saves bean state between client invocations. In a stateful session bean, the instance variables represent the state of a unique client/bean session. Since the client interacts with the bean, such state is often called the conversational state. A session bean is not shared and it can have only one client. When the client terminates, its session bean appears to terminate and is no longer associated with the client. The state is retained for the duration of the client/bean session. If the client removes the bean, the session ends and the state disappears.
  • Stateless Session Bean: A stateless session bean does not maintain a conversational state with the client. When a client invokes the methods of a stateless bean, the bean’s instance variables may contain a state specific to that client, but only for the duration of the invocation. When the method is finished, the client-specific state should not be retained. Clients may, however, change the state of instance variables in pooled stateless beans, and this state is held over to the next invocation of the pooled stateless bean. Except during method invocation, all instances of a stateless bean are equivalent, allowing the EJB container to assign an instance to any client. Because they can support multiple clients, stateless session beans can offer better scalability for applications that require large numbers of clients. A stateless session bean can implement a web service, but a stateful session bean cannot.
@Remote
public interface InventorySessionBeanRemote {
   //add business method declarations
}

@Stateful
public class InventorySessionBean implements InventorySessionBeanRemote {
   //implement business method 
}

@Stateless
public class InventorySessionBean implements InventorySessionBeanRemote {
   //implement business method 
}

Message Driven Beans: A message-driven bean is an enterprise bean that allows Java EE applications to process messages asynchronously. This type of bean normally acts as a JMS message listener, which receives and processes JMS messages or other kinds of messages. Clients never invoke MDB methods directly. Message-driven beans, unlike session beans, are not accessed by clients through interfaces; the bean class is used directly. Below are some aspects of MDBs:
  • A message-driven bean’s instances retain no data or conversational state for a specific client, hence are stateless.
  • All instances of a message-driven bean are equivalent, allowing the EJB container to assign a message to any message-driven bean instance. The container can pool these instances to allow streams of messages to be processed concurrently.
  • A single message-driven bean can process messages from multiple clients.
  • Message-driven beans are transaction aware and are relatively short-lived.
@MessageDriven(
   name = "QueueMessageHandler",
   activationConfig = {
      @ActivationConfigProperty( propertyName = "destinationType", 
                                 propertyValue = "javax.jms.Queue"),
      @ActivationConfigProperty( propertyName = "destination", 
                                 propertyValue ="/queue/InfoQueue")
   }
)
public class InventoryMessageBean implements MessageListener {
 
   // general purpose annotation used to inject anything that the container knows about.
   @Resource
   private MessageDrivenContext mdbContext;  
 
   // injects the dependency as ejb instance into another ejb
   @EJB
   InventoryPersistentBeanRemote inventoryBean;
 
   public InventoryMessageBean(){        
   }
 
   public void onMessage(Message message) {
   }
} 

Entity Beans: An entity bean is a simple POJO mapped to a database table. Entity beans are used to model persistent data objects. Entities model the lower-level application concepts that high-level business processes manipulate. They are object oriented representations of the application data stored in the database, and hence survive container crashes and shutdowns. JPA entities support OO capabilities, including relationships between entities, inheritance, and polymorphism. Entity beans are transactional.
    The @Entity annotation marks a particular Java class as an EJB entity, a persistent object representing a data-store record, which is preferably serializable. The EntityManager interface reads the ORM metadata for an entity and performs persistence operations. It knows how to add entities to the database, update stored entities, and delete and retrieve entities from the database. It also helps to execute queries using the Query interface. The persistence unit is a reference to the data source used to access the database. It contains configuration such as the type of database and whether database tables should be automatically created by the persistence engine.

@Entity
@Table(name="items")
public class Item implements Serializable{
    
   private int id;
   private String name;

   public Item(){ }

   //mark id as primary key with autogenerated value
   //map database column id with id field
   @Id
   @GeneratedValue(strategy= GenerationType.IDENTITY)
   @Column(name="id")
   public int getId() {
      return id;
   }
   ...
}

@Stateless
public class InventoryPersistentBean implements InventoryPersistentBeanRemote {
 
   //pass persistence unit to entityManager.
   @PersistenceContext(unitName="EjbComponentPU")
   private EntityManager entityManager;         

   public void addItem(Item item) {
      entityManager.persist(item);
   }    

   public List<Item> getItems() {        
      return entityManager.createQuery("FROM Item").getResultList();
   }
   ...
}

Explain the life cycle of EJB beans ?
Each type of enterprise bean (stateful session, stateless session, or message-driven) has a different lifecycle.

Lifecycle of a Stateful Session Bean:
  • The client initiates the lifecycle by obtaining a reference to a stateful session bean instance through dependency injection or JNDI lookup.
  • The container performs any dependency injection as specified by metadata annotations on the bean class or by the deployment descriptor.
  • It then calls the PostConstruct lifecycle callback interceptor method(s) for the bean, if any.
  • The container then returns the session object reference to the client. The instance is now in the method ready state and ready for client’s business methods.
  • In the ready stage, the EJB container may decide to deactivate, or passivate, the bean by moving it from memory to secondary storage. The EJB container invokes the method annotated @PrePassivate, if any, immediately before passivating it. If a client invokes a business method on the bean while it is in the passive stage, the EJB container activates the bean, calls the method annotated @PostActivate, if any, and then moves it to the ready stage.
  • At the end of the lifecycle, the client invokes a method annotated @Remove, and the EJB container calls the method annotated @PreDestroy, if any. The bean’s instance is then ready for garbage collection.
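These callbacks map onto annotations roughly as in the sketch below (the class and method names are illustrative; the lifecycle annotations come from javax.ejb and javax.annotation):

@Stateful
public class ShoppingCartBean {

   @PostConstruct
   public void init() { /* called after construction and dependency injection */ }

   @PrePassivate
   public void beforePassivation() { /* called just before the container passivates */ }

   @PostActivate
   public void afterActivation() { /* called just after the container activates */ }

   @Remove
   public void checkout() { /* client calls this method to end the session */ }

   @PreDestroy
   public void cleanup() { /* called just before the instance is discarded */ }
}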
     
A business method is executed either in a transaction context or with an unspecified transaction context. When the session bean instance is included in a transaction, the container issues the afterBegin method on it before the business method and the instance becomes associated with the transaction. The container issues beforeCompletion before transaction commit and afterCompletion when the transaction completes.

Lifecycle of a Stateless Session Bean:
  • The container invokes the newInstance method on the session bean class to create a new session bean instance.
  • It then performs any dependency injection as specified by metadata annotations on the bean class or by the deployment descriptor.
  • The container then calls the @PostConstruct lifecycle callback interceptor methods for the bean, if any.
  • The instance of the session bean is then ready to be delegated a business method call from a client or a call from the container to a timeout callback method.
  • When the container no longer needs the instance, it invokes the @PreDestroy callback interceptor methods, if any. This ends the life of the stateless session bean instance, and the bean’s instance is ready for garbage collection.

Lifecycle of a Message Driven Bean:
The EJB container usually creates a pool of message-driven bean instances. For each instance, the EJB container performs the following tasks.
  • If the message-driven bean uses dependency injection, the container injects these references before instantiating the instance.
  • The container calls the method annotated @PostConstruct, if any.
  • Like a stateless session bean, a message-driven bean is never passivated and has only two states: nonexistent and ready to receive messages.
At the end of the lifecycle, the container calls the method annotated @PreDestroy, if any. The bean’s instance is then ready for garbage collection.

Monday, April 27, 2015

Spring Interview Questions

What is the difference between Inversion of Control and Dependency Injection ?
Inversion of Control is a design in which the framework (or reusable library) calls the implementations provided by the application, rather than having the application call the methods in a framework to carry out generic tasks. Here the control flow of the program is inverted: the framework takes control of the program flow instead of the programmer. The component can then perform its task entirely by itself once all the necessary information or functionality is provided by the application implementation. IoC allows us to decouple the application into separate independent components. Such decoupling of components enables easier testing in isolation, reduced complexity, and better maintenance by switching components.

One of the most common versions of IoC is Dependency Injection, where necessary object instances (functionality) are passed into an object through constructors, setters, or service look-ups in order for the object to function independently. Each component must declare a list of dependencies required to perform its task. At runtime a special component, generally an IoC container, performs the binding between these components: it provides values for each component's published dependencies.
           In other words, we do not create our objects but describe how they should be created. We don’t directly connect our components and services together in code but describe which services are needed by which components in a configuration file. An IoC container is responsible for hooking it all up.

What are different types of Dependency Injections ?
There are three types of dependency injection:
  • Constructor Injection (e.g. Pico container, Spring etc): Dependencies are provided as constructor parameters.
  • Setter Injection (e.g. Spring): Dependencies are assigned through JavaBeans properties (ex: setter methods).
  • Interface Injection (e.g. Avalon): Injection is done through an interface.
Spring supports only Constructor and Setter Injection.
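A minimal sketch of the first two styles (the OrderService, PaymentGateway, and Logger types are illustrative):

public class OrderService {

    private final PaymentGateway gateway; // provided via constructor injection
    private Logger logger;                // provided via setter injection

    public OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    public void setLogger(Logger logger) {
        this.logger = logger;
    }
}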

What is a Spring autowiring ?
Autowiring enables Spring to inject dependencies without having to specify them explicitly. Spring inspects the bean factory contents and establishes relationships amongst collaborating beans. The Spring container can autowire relationships between collaborating beans without using <constructor-arg> and <property> elements. The autowire attribute of the <bean/> element specifies the autowire mode for a bean definition. Below are the autowiring modes which instruct the Spring container to use autowiring for dependency injection.

Spring Autowiring Modes
Mode Description
no It is the default setting, which means no autowiring.
byName Autowiring by property name. The Spring container matches the properties of beans whose autowire attribute is set to byName with beans defined under the same names in the configuration file.
byType Autowiring by the data type of the property. The Spring container matches such a property when exactly one bean of the property's type exists in the configuration file. If more than one such bean exists, a fatal exception is thrown.
constructor Similar to byType, but the type applies to constructor arguments. If there is not exactly one bean of the constructor argument type in the container, a fatal error is raised.
autodetect Spring first tries to wire using constructor autowiring; if that does not work, Spring tries byType.

Autowiring can also be specified in bean classes using the @Autowired annotation. To use the @Autowired annotation in bean classes, we must first enable annotation processing in the Spring application using the <context:annotation-config /> configuration. The @Autowired annotation can then be used to autowire beans on setter methods, constructors, properties, or methods with arbitrary names and/or multiple arguments. By default, the @Autowired annotation implies the dependency is required, similar to the @Required annotation; however, you can turn off this default behavior by using the (required=false) option with @Autowired. Simple properties such as primitives, Strings, and Classes cannot be autowired.
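A minimal sketch (the CustomerService, CustomerRepository, and AuditLogger types are illustrative):

public class CustomerService {

    // required by default: container startup fails if no matching bean exists
    @Autowired
    private CustomerRepository repository;

    // optional dependency: left null when no matching bean exists
    @Autowired(required = false)
    private AuditLogger auditLogger;
}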

What are various Bean scopes in Spring ? Which is the default Bean scope ?
By default, all Spring beans are singletons. The prototype scope is used to create a new bean instance each time.

Spring Bean Scopes
Scope Description
singleton Scopes the bean definition to a single instance per Spring container (default).
prototype Allows a bean to be instantiated any number of times (once per use).
request Scopes a bean definition to an HTTP request. Only valid when used with a web-capable Spring context (such as with Spring MVC).
session Scopes a bean definition to an HTTP session. Only valid when used with a web-capable Spring context (such as with Spring MVC).
global-session Scopes a bean definition to a global HTTP session. Only valid when used in a portlet context.
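For example, a prototype-scoped bean declared in XML (the bean id and class are illustrative):

<bean id="searchService" class="com.example.SearchService" scope="prototype"/>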

Is a singleton bean from the Spring container thread safe ?
The Spring framework does not do anything under the hood concerning the multi-threaded behavior of a singleton bean. It is the developer's responsibility to deal with concurrency issues and thread safety of the singleton bean. Also, all the Spring scopes are enforced during the creation of the Spring bean.

Describe Spring Bean Life Cycle ?
A Spring Bean represents a POJO component performing some useful operation. All Spring Beans reside within a Spring Container also known as IOC Container. The Spring Framework is transparent and thereby hides most of the complex infrastructure and the communication that happens between the Spring Container and the Spring Beans. This section lists the sequence of activities that will take place between the time of Bean Instantiation and hand over of the Bean reference to the Client Application.
  1. The Bean Container finds the definition of the Spring Bean in the Configuration file.
  2. The Bean Container creates an instance of the Bean using Java Reflection API.
  3. If any properties are mentioned, then they are also applied. If the property itself is a Bean, then it is resolved and set.
  4. If the Bean class implements the BeanNameAware interface, then the setBeanName() method will be called by passing the name of the Bean.
  5. If the Bean class implements the BeanClassLoaderAware interface, then the method setBeanClassLoader() method will be called by passing an instance of the ClassLoader object that loaded this bean.
  6. If the Bean class implements the BeanFactoryAware interface, then the method setBeanFactory() will be called by passing an instance of BeanFactory object.
  7. If there are any BeanPostProcessors object associated with the BeanFactory that loaded the Bean, then the method postProcessBeforeInitialization() will be called even before the properties for the Bean are set.
  8. If the Bean class implements the InitializingBean interface, then the method afterPropertiesSet() will be called once all the Bean properties defined in the Configuration file are set.
  9. If the <bean> definition in the configuration file contains an init-method attribute, then the value of the attribute will be resolved to a method name in the Bean class and that method will be called once the bean's properties have been set.
  10. The postProcessAfterInitialization() method will be called if there are any Bean Post Processors attached for the Bean Factory object.
  11. If the Bean class implements the DisposableBean interface, then the method destroy() will be called when the Application no longer needs the bean reference.
  12. If the <bean> definition in the configuration file contains a destroy-method attribute, then the corresponding method definition in the Bean class will be called just before the bean is removed from the container.
@PostConstruct and @PreDestroy annotations are not exclusive to Spring; they are a standard and consequently widely used in many container managed environments including Spring. The @PostConstruct annotation defines a method that will be called after a bean has been fully initialized, in other words after bean construction and all dependency injection. The @PreDestroy annotation defines a method that will be called just before a bean is destroyed, which is usually useful for resource clean up. A sketch is below.
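A minimal sketch (the class and method names are illustrative):

public class CacheManager {

    @PostConstruct
    public void warmUp() {
        // runs after construction and all dependency injection has completed
    }

    @PreDestroy
    public void releaseResources() {
        // runs just before the container destroys the bean
    }
}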

 


How to inject collections using spring ? How to inject collections using annotations ?
Spring offers four types of collection configuration elements to inject Java Collection types List, Set, Map, and Properties which are as follows:

Element Description
<list> It helps in wiring, i.e. injecting, a list of values, allowing duplicates.
<set> It helps in wiring a set of values but without any duplicates.
<map> It is used to inject a collection of name-value pairs where name and value can be of any type.
<props> It is used to inject a collection of name-value pairs where the name and value are both Strings.

Either <list> or <set> can be used to wire any implementation of java.util.Collection or an array.

How are properties configured in Spring ?
The PropertyPlaceholderConfigurer class is used to pull properties from a properties file into the bean configuration using the special variable format ${variable}.
   <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
      <property name="location">
            <value>database.properties</value>
      </property>
   </bean>
@Configuration
@PropertySource("classpath:root/test.props")
public class SampleConfig {

 @Value("${test.prop}")
 private String attr;
 
 @Bean
 public SampleService sampleService() {
  return new SampleService(attr);
 }

 @Bean
 public PropertySourcesPlaceholderConfigurer placeHolderConfigurer() {
  return new PropertySourcesPlaceholderConfigurer();
 }
}

What is a Validator and How to use it ?
A Validator is Spring's contract for validating application objects. The org.springframework.validation.Validator interface declares two methods: supports(Class<?> clazz), which reports whether the validator can validate instances of the given class, and validate(Object target, Errors errors), which validates the given object and registers any validation failures on the Errors object. A validator can be invoked manually or plugged into Spring MVC data binding.
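A minimal sketch of a validator (assuming a Customer class with a name property):

public class CustomerValidator implements Validator {

    @Override
    public boolean supports(Class<?> clazz) {
        return Customer.class.isAssignableFrom(clazz);
    }

    @Override
    public void validate(Object target, Errors errors) {
        Customer customer = (Customer) target;
        // register a field error when the name property is missing
        if (customer.getName() == null || customer.getName().isEmpty()) {
            errors.rejectValue("name", "name.required", "Name is required");
        }
    }
}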

What is the difference between @Autowire and @Inject ?
@Inject: It is part of javax.inject package. The inject annotation can be used on fields, constructors and setter methods. It is used for dependency injection when the bean instance is instantiated by the container.

@Autowired: It is part of Spring's org.springframework.beans.factory.annotation package. It is used for dependency injection on fields, constructors and setter methods.

@Resource: It is part of javax.annotation package. It can be used on fields or setter methods. The @Resource annotation takes a 'name' attribute which will be interpreted as the bean name to be injected, thus following by-name autowiring semantics.

@Qualifier: Spring's @Qualifier annotation is part of the org.springframework.beans.factory.annotation package (JSR 330 defines a similar @Qualifier meta-annotation in javax.inject). It is used to select which bean to wire by name when more than one bean of the same type exists in the container.

What is Aspect Oriented Programming ?
Aspect oriented programming provides modularity by separating cross-cutting concerns into aspects. A cross-cutting concern is a concern that should be centralized in one location in the code as far as possible, such as transaction management, authentication, logging, or security. AOP recommends abstracting and encapsulating such crosscutting concerns.
  A concern is behavior expected of a module of an application, defined as a functionality to implement. A cross-cutting concern is a concern which is applicable throughout the application and affects the entire application.

Joinpoint: It defines where we want to execute the code. A joinpoint is a point in the execution of the application where an aspect can be plugged in. Such a point could be a method being called, an exception being thrown, or even a field being modified. These are the points where the aspect’s code can be inserted into the normal flow of the application to add new behavior. Spring supports only the method execution join point.

Advice: Advice defines both the what and the when of an aspect. Advice represents an action taken by an aspect at a particular join point.  There are different types of advices depending upon the position they are called in a program, such as before, after or around:

Before Advice: It executes before a join point. It does not have the ability to interrupt the execution flow proceeding to the join point unless it throws an exception.
After Returning Advice: It executes after a join point completes normally.
After Throwing Advice: It executes if the method exits by throwing an exception.
After (finally) Advice: It executes after a join point, regardless of whether the join point exits with a normal or exceptional return.
Around Advice: It executes before and after a join point.

Pointcut: It is a collection of join points. Pointcuts help to narrow down the join points advised by an aspect. A pointcut definition matches one or more join points at which advice should be woven. It is an expression language of AOP that matches join points.
Pointcuts are specified using explicit class and method names or through regular expressions that define matching class and method name patterns. Spring supports union and intersection operations on pointcuts: union matches methods that either pointcut matches, while intersection matches methods that both pointcuts match.

Aspect: An aspect is a class which contains advices and pointcuts.
<bean id="minstrel" class="com.knight.Minstrel"/>
<aop:config>
   <aop:aspect ref="minstrel">
      <aop:pointcut id="questPointcut" expression="execution(* *.embarkOnQuest(..))" />
      <aop:before method="singBefore" pointcut-ref="questPointcut" arg-names="bean" />
      <aop:after-returning method="singAfter" pointcut-ref="questPointcut" arg-names="bean" />
   </aop:aspect>
</aop:config> 
 
The AspectJ annotations can be integrated with the Spring AOP framework to allow easy method interception. An AOP proxy is an object created by the AOP framework in order to implement the aspect contracts, such as advising method executions. Once Spring AOP is enabled, it automatically generates a proxy for the bean in order to intercept method invocations and ensure that the corresponding advice gets executed as needed. @AspectJ support is enabled using XML or Java configuration as below:
<aop:aspectj-autoproxy/>
@Configuration
@EnableAspectJAutoProxy
public class AppConfig {
}

Common AspectJ annotations are as below:
@Before – Runs before the method execution.
@After – Runs after the method execution, regardless of its outcome.
@AfterReturning – Runs after the method returns a result, and can intercept the returned result as well.
@AfterThrowing – Runs after the method throws an exception.
@Around – Runs around the method execution, combining all three advices above.

@Aspect
public class LoggingAspect {
 
   @AfterReturning(
      pointcut = "execution(* com.sample.customer.Customer.addCustomer(..))",
      returning= "result")
   public void logAfterReturning(JoinPoint joinPoint, Object result) {

      System.out.println("Method Name : " + joinPoint.getSignature().getName());
      System.out.println("Method Return Value : " + result);
   }
 
}
....
public class Customer {

  public String addCustomer(){
      // Add customer business logic
      return "customer";
  }
}


What is weaving ? What are the different points where weaving can be applied ?
Weaving is the process of linking aspects with other application types or objects to create an advised object. Weaving can be done at compile time, at load time, or at runtime. Spring AOP, like other pure Java AOP frameworks, performs weaving at runtime.

What is dependency checking ?
The Spring IoC container has the ability to check for the existence of unresolved dependencies of a bean inside the container. This feature ensures that all properties (or all properties of a certain type) are set on a bean. In many cases, though, a bean has default values for its properties, or some properties do not apply to all usage scenarios, which limits the usage of this feature. Dependency checking can be enabled or disabled on a per bean basis; by default it is disabled. Dependency checking can be handled in several different modes, as below.

none – No dependency checking. Hence bean properties which have no value specified are simply not set.
simple – Dependency checking is performed for primitive types and collections (map, list).
objects – Dependency checking is performed for properties of object type i.e. collaborators.
all – Dependency checking is done for properties of any type i.e. collaborators, primitive types and collections.

In XML-based configuration the 'dependency-check' attribute is specified in the bean definition.
 <bean id="CustomerBean" class="com.entity.Customer" 
         dependency-check="simple">
 
  <property name="person" ref="PersonBean" />
  <property name="action" value="buy" />
 </bean>

@Required Annotation: In most scenarios we need to check that a particular property has been set, rather than all properties of certain types (primitive, collection or object). The @Required annotation enforces such a check, making sure that the annotated property has been set. In order to apply the @Required annotation, the RequiredAnnotationBeanPostProcessor must be enabled using "<context:annotation-config />" in the bean configuration file. The @Required annotation is more flexible than dependency checking in the XML file, because it can apply to a particular property only. A sketch is below.
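A minimal sketch matching the XML example above (the Customer and Person types are illustrative):

public class Customer {

    private Person person;

    // container startup fails fast if the person property has not been set
    @Required
    public void setPerson(Person person) {
        this.person = person;
    }
}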

Which settings are inherited by a child bean from its parent bean ?
In Spring, inheritance is supported in bean configuration so that a bean can share common values, properties, or configurations. A child bean can inherit its parent bean's configurations, properties, and some attributes. A child bean definition inherits constructor argument values, property values, and method overrides from the parent, with the option to add new values. If an init method, destroy method and/or static factory method are specified, they override the corresponding parent settings. The remaining settings are always taken from the child definition: depends on, autowire mode, dependency check, singleton, lazy init.

Which is the central IoC container interface in Spring IoC ?
The org.springframework.beans.factory.BeanFactory is the actual representation of the Spring IoC container that is responsible for containing and otherwise managing the beans. The BeanFactory interface is the central IoC container interface in Spring. Its responsibilities include instantiating or sourcing application objects, configuring such objects, and assembling the dependencies between these objects.

Difference between BeanFactory and ApplicationContext ?
The ApplicationContext interface is derived from the BeanFactory interface and hence includes all the functionality of the BeanFactory, such as Bean instantiation and wiring. It also provides some other features, listed below, and hence is recommended over BeanFactory:
  • Automatic BeanPostProcessor registration
  • Automatic BeanFactoryPostProcessor registration
  • Convenient MessageSource access and thus provides internationalization support (i18n)
  • ApplicationEvent publication

How do you make a bean to lazy load in ApplicationContext which loads all beans eagerly during startup ?
A bean is lazily loaded only when an instance of that Java class is requested by another method or class. The BeanFactory container (and its subclasses) loads beans lazily. The ApplicationContext container follows a pre-loading methodology, where all beans are instantiated as soon as the Spring configuration is loaded by the container. The "default-lazy-init" attribute of the beans element tells the application context whether it needs to lazily load the beans. The default value of the attribute is false, which makes the ApplicationContext load beans eagerly.
<beans default-lazy-init="true">
        <!-- all your beans -->
</beans>

What are interceptors available in Spring MVC ?
Spring provides interceptors in order to intercept an HTTP request and carry out processing before handing it over to the controller handler methods. A Spring interceptor either implements the HandlerInterceptor interface or extends the abstract HandlerInterceptorAdapter class, which provides a base implementation of this interface. HandlerInterceptor declares three methods based on where we want to intercept the HTTP request; a minimal interceptor is sketched after this list.
  • The preHandle method is used to intercept the request before it is handed over to the handler method. It returns true to continue processing through the next interceptor, or through the handler method itself if there are no further interceptors, and returns false when no further processing is needed.
  • The postHandle method is called after the HandlerAdapter has invoked the handler but before the DispatcherServlet renders the view. This method can be used to add additional attributes to the ModelAndView object to be used in the view pages.
  • The afterCompletion method is a callback that is invoked once the handler has executed and the view is rendered.
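
A minimal sketch of an interceptor extending HandlerInterceptorAdapter (the class name and log statements are illustrative):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class LoggingInterceptor extends HandlerInterceptorAdapter {

   @Override
   public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
         Object handler) throws Exception {
      System.out.println("Before handler: " + request.getRequestURI());
      return true; // returning false would stop further processing
   }

   @Override
   public void postHandle(HttpServletRequest request, HttpServletResponse response,
         Object handler, ModelAndView modelAndView) throws Exception {
      if (modelAndView != null) {
         modelAndView.addObject("handledAt", System.currentTimeMillis()); // extra model attribute
      }
   }

   @Override
   public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
         Object handler, Exception ex) throws Exception {
      System.out.println("Completed: " + request.getRequestURI());
   }
}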

Describe the life cycle of Spring MVC Request ?

Spring defines a single front controller known as the DispatcherServlet which handles all requests to the web application. The DispatcherServlet is declared in web.xml, and all the requests it should handle are mapped using the corresponding URL mapping. The DispatcherServlet's job is to send the request on to a Spring MVC controller, where an application may have several controllers. The DispatcherServlet consults one or more handler mappings, which determine the appropriate controller based on the request URL. Handler mappings typically map a specific controller bean to a URL pattern, and implement the HandlerMapping interface along with the Ordered interface to indicate their precedence. The DispatcherServlet then sends the request on to its chosen controller.
        At the controller, the request's payload is dropped off and the information is processed. The controller logic comes up with a result after processing the model. The controller then packages up the model data and the name of a view into a ModelAndView object, which contains the model data and the logical name used to look up the actual HTML/JSP view. The controller sends the request and the ModelAndView object back to the DispatcherServlet.
        The DispatcherServlet asks a view resolver to help find the actual JSP view. The view resolver uses the logical view name in the ModelAndView object returned from the controller to look up a View bean that renders results to the user. A view resolver is any bean that implements the ViewResolver interface. The DispatcherServlet then delivers the model data to the View implementation, thus completing the request's job. The view also has access to the model data added to the ModelMap inside the ModelAndView container.
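
The DispatcherServlet declaration referred to above might look as below in web.xml (the servlet name and URL pattern are illustrative):

<servlet>
   <servlet-name>dispatcher</servlet-name>
   <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
   <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
   <servlet-name>dispatcher</servlet-name>
   <url-pattern>/</url-pattern>
</servlet-mapping>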

The @Controller annotation indicates that a particular class serves the role of a controller without implementing Controller interface. Spring does not require you to extend any controller base class or reference the Servlet API. @RequestMapping annotation is used to map a URL to either an entire class or a particular handler method.
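
A minimal sketch of an annotated controller (the class, URL and view names are illustrative):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.ModelAndView;

@Controller
@RequestMapping("/students")
public class StudentController {

   // handles requests to /students/list and returns the logical view name
   @RequestMapping("/list")
   public ModelAndView list() {
      ModelAndView mav = new ModelAndView("studentList");
      mav.addObject("message", "Student list page"); // model data for the view
      return mav;
   }
}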

Why is Spring MVC better than Struts 2 ?
  • Spring MVC is a loosely coupled framework whereas Struts is tightly coupled.
  • Spring provides a very clean division between controllers, JavaBean models, and views.
  • In Struts 2, Actions are newly instantiated for every request, whereas in Spring MVC the default behavior is to act as a singleton: controllers are created once and held in memory/shared across all requests.
  • Spring MVC is very flexible. Unlike Struts, which forces Action and Form objects into concrete inheritance (so they cannot inherit from any other class), Spring MVC is entirely based on interfaces. Furthermore, just about every part of the Spring MVC framework is configurable by plugging in a custom interface. Convenience classes are also provided by Spring as an implementation option.
  • Spring MVC is truly view-agnostic. We can use JSP or any other view technology, including Velocity and XSLT. Custom view mechanisms can also be used by implementing the Spring View interface.
  • Struts provides very specific tags which bind request parameters to ActionForm fields and show binding/validation errors. Spring MVC, on the other hand, has one simple bind tag that handles everything, making the JSP pages smaller and leaving them with more pure HTML content.
  • Spring controllers are configured via IoC like any other objects. This makes them easy to test and beautifully integrated with other objects managed by Spring.
  • Spring MVC web tiers are typically easier to test than Struts web tiers, due to the avoidance of forced concrete inheritance and explicit dependence of controllers on the dispatcher servlet.
  • Using Spring MVC, the web tier becomes a thin layer on top of a business object layer. Struts, on the other hand, leaves us on our own in implementing our business objects. Spring provides an integrated framework for all tiers of the application.
  • Unlike Struts, Spring MVC has no ActionForms; it binds directly to domain objects.
  • Spring MVC code is more testable since validation has no dependency on the Servlet API.
  • Struts imposes dependencies on application controllers since they must extend a Struts class. Spring MVC doesn't force this, although there are convenience Controller implementations that one can choose to extend.
  • Spring offers better integration with view technologies other than JSP (Velocity / XSLT / FreeMarker / XL etc.)

What is Spring Web Flow ? How does it work ?
Spring Web Flow is an extension to Spring MVC which provides the ability to define an application's flow external to the application's logic and to create reusable flows that can be used across multiple applications.
           The FlowController is a Spring MVC controller that acts as a front controller for Spring Web Flow applications; it is responsible for handling all requests pertaining to a flow. Its only mandatory property, flowExecutor, is wired with a flow executor, which ultimately carries out the steps described in the flow. The flow executor keeps track of all flows that are currently executing and directs each flow to the state it should go to next. A mapping URL is used to interact with Spring Web Flow and is configured on the FlowController using handler mappings such as SimpleUrlHandlerMapping. The flow registry is effectively a librarian that curates a collection of flow definitions; the flow executor asks the flow registry for a flow when it needs one.
          There are three main elements that make up an application flow: states, events, and transitions. States are points in a flow where some activity takes place, e.g. Action, Decision, Start, End, Subflow and View states. View and Action states represent the user's and the application's side of the conversation respectively. Once a state has completed, it fires an event, which is simply a String value indicating the outcome of the state. The event fired by a state must be mapped to a transition to serve any purpose. Transitions define the actual flow, unlike states, which define an activity within the flow; they indicate which state the flow should go to next. A minimal flow definition is sketched below.
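
A minimal sketch of a flow definition with view states, transitions and an end state (state and view names are illustrative; schema location attributes omitted for brevity):

<flow xmlns="http://www.springframework.org/schema/webflow">
   <view-state id="enterDetails" view="detailsForm">
      <transition on="submit" to="confirm"/>
   </view-state>
   <view-state id="confirm" view="confirmation">
      <transition on="confirm" to="finish"/>
      <transition on="revise" to="enterDetails"/>
   </view-state>
   <end-state id="finish"/>
</flow>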

What is WebApplicationContext ?
The WebApplicationContext is an extension of the plain ApplicationContext with some extra features necessary for web applications, such as resolving themes and knowing which servlet it is associated with.

What is JdbcTemplate in Spring ? What is benefit of using JdbcTemplate ?
The JdbcTemplate class is the central class in the JDBC core package. It simplifies the use of JDBC since it handles the creation and release of resources, which helps to avoid common errors such as forgetting to close the connection. It executes the core JDBC workflow, such as statement creation and execution, leaving application code to provide the SQL and extract results. This class executes SQL queries, update statements and stored procedure calls, initiates iteration over ResultSets, converts database data into primitives or objects, and extracts returned parameter values. It also catches JDBC exceptions and translates them to the generic, more informative exception hierarchy defined in the org.springframework.dao package.
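
A minimal sketch of JdbcTemplate usage (the dataSource, table and column names are illustrative):

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

// query for a single value, mapped to the requested type
int count = jdbcTemplate.queryForObject("select count(*) from Student", Integer.class);

// query with a bind variable
String name = jdbcTemplate.queryForObject(
      "select name from Student where id = ?", new Object[] { 1 }, String.class);

// insert/update/delete statements go through update()
jdbcTemplate.update("update Student set age = ? where id = ?", 25, 1);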

What types of transaction management does Spring support ?
There are two types of transaction management supported by Spring, as below:

Programmatic transaction management: It allows you to manage the transaction programmatically in your source code. That gives you extreme flexibility, but it is difficult to maintain.
       To implement the programmatic approach we need an instance of TransactionDefinition with the appropriate transaction attributes; the DefaultTransactionDefinition class, which has default transaction attributes, can be used. The transaction is started by calling the getTransaction() method on the PlatformTransactionManager and passing the reference to the TransactionDefinition, which returns an instance of TransactionStatus. The TransactionStatus object helps to track the current status of the transaction. If everything goes fine we use the commit() method of PlatformTransactionManager to commit the transaction, otherwise we use rollback() to roll back the complete operation.
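      // Start a new transaction using the default transaction attributes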
      TransactionDefinition def = new DefaultTransactionDefinition();
      TransactionStatus status = transactionManager.getTransaction(def);

      try {
         String sqlInsertQuery = "insert into Student (name, age) values (?, ?)";
         jdbcTemplateObject.update(sqlInsertQuery, name, age);

         // Get the latest student id to be used in Marks table
         String sqlGetIdQuery = "select max(id) from Student";
         int sid = jdbcTemplateObject.queryForInt( sqlGetIdQuery );

         sqlInsertQuery = "insert into Marks(sid, marks, year) values (?, ?, ?)";
         jdbcTemplateObject.update( sqlInsertQuery, sid, marks, year);

         System.out.println("Created Name = " + name + ", Age = " + age);
         transactionManager.commit(status);
      } catch (DataAccessException e) {
         System.out.println("Error in creating record, rolling back");
         transactionManager.rollback(status);
         throw e;
      }

Declarative transaction management: The declarative approach allows you to manage transactions through configuration instead of hard-coding them in the source code. It separates transaction management from the business code and uses only annotations or XML configuration to manage the transactions (an annotation-driven sketch follows the XML example below). Below are the steps associated with declarative transactions:
  • We use the <tx:advice /> tag to create transaction-handling advice, and at the same time we define a pointcut that matches all methods we wish to make transactional and references the transactional advice.
  • If a method name has been included in the transactional configuration, the created advice begins a transaction before calling the method.
  • The target method is executed inside a try-catch block.
  • If the method finishes normally, the AOP advice commits the transaction; otherwise it performs a rollback.
   <tx:advice id="txAdvice"  transaction-manager="transactionManager">
      <tx:attributes>
      <tx:method name="create"/>
      </tx:attributes>
   </tx:advice>
 
   <aop:config>
      <aop:pointcut id="createOperation" expression="execution(* com.abc.StudentJDBCTemplate.create(..))"/>
      <aop:advisor advice-ref="txAdvice" pointcut-ref="createOperation"/>
   </aop:config>
 
   <!-- Initialization for TransactionManager -->
   <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
      <property name="dataSource"  ref="dataSource" />    
   </bean>
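
Alternatively, declaring <tx:annotation-driven transaction-manager="transactionManager"/> lets Spring wrap any @Transactional method in a transaction. A minimal sketch (the service class and SQL are illustrative):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

public class StudentService {

   private JdbcTemplate jdbcTemplate;

   public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
      this.jdbcTemplate = jdbcTemplate;
   }

   // Spring begins a transaction before this method and commits it on normal
   // completion; an unchecked exception triggers a rollback.
   @Transactional
   public void create(String name, int age) {
      jdbcTemplate.update("insert into Student (name, age) values (?, ?)", name, age);
   }
}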

What is Spring Integration ?
Spring Integration is a framework built upon core Spring that provides integration solutions for event-driven and messaging-centric architectures. It provides a layered architecture and interface-based contracts between layers. The messaging system follows the pipes-and-filters model: the "filters" represent any component capable of producing and/or consuming messages, and the "pipes" transport messages between filters so that the components themselves remain loosely coupled. A Message is a generic wrapper that consists of a payload and headers.
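
A minimal sketch of the pipes-and-filters model using Spring Integration's core API (assuming the Spring Integration 4.x package layout; the channel and payload are illustrative):

import org.springframework.integration.channel.DirectChannel;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHandler;
import org.springframework.messaging.support.GenericMessage;

public class PipesAndFiltersDemo {
   public static void main(String[] args) {
      DirectChannel channel = new DirectChannel();        // the "pipe"
      channel.subscribe(new MessageHandler() {            // the consuming "filter"
         public void handleMessage(Message<?> message) {
            System.out.println("Received payload: " + message.getPayload());
         }
      });
      channel.send(new GenericMessage<String>("hello"));  // payload wrapped in a Message
   }
}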

How did you configure caching using spring ?
Spring 3.1 provides support for transparently adding caching to an existing Spring application, allowing consistent use of various caching solutions. Spring provides two Java annotations for caching declarations, @Cacheable and @CacheEvict, which allow methods to trigger cache population or cache eviction; for XML configuration, the cache:annotation-driven element is defined in the application context.
        The @Cacheable annotation marks methods that are cacheable, that is, methods whose result is stored in the cache so that on subsequent invocations (with the same arguments) the value in the cache is returned without actually executing the method. If only the name of the cache is specified in the @Cacheable annotation, the default key generation (or whichever strategy is configured as the default) kicks in. The developer can also specify custom key generation through the key attribute, using SpEL to pick the arguments of interest, perform operations or even invoke arbitrary methods.
        The @CacheEvict annotation marks methods that perform cache eviction, that is, methods that act as triggers for removing data from the cache. @CacheEvict requires one to specify one (or multiple) caches affected by the action and allows a key or a condition to be specified; in addition, it features an extra parameter, allEntries, which indicates whether a cache-wide eviction should be performed rather than just evicting a single entry (based on the key). A usage sketch follows the configuration below.
<!-- CacheManager manages and controls all the Cache -->
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
 <property name="cacheManager" ref="ehcache"/>
</bean>
<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
 <property name="configLocation" value="classpath:config/ehcache.xml"/>
 <property name="shared" value="true"/>
</bean>

<!-- Configuration in ehcache.xml -->
<ehcache xsi:noNamespaceSchemaLocation="ehcache.xsd" 
   updateCheck="true" 
   monitoring="autodetect" 
   dynamicConfig="true"
   maxBytesLocalHeap="150M">
   
 <diskStore path="java.io.tmpdir"/> 
 
 <cache name="searchResults"
       maxBytesLocalHeap="100M"
       eternal="false"
       timeToIdleSeconds="300"
       overflowToDisk="true"
       maxElementsOnDisk="1000"       
       memoryStoreEvictionPolicy="LRU"/>       
 
 <cache name="podcasts"
       maxBytesLocalHeap="40M"
       eternal="false"
       timeToIdleSeconds="300"
       overflowToDisk="true"
       maxEntriesLocalDisk="1000"
       diskPersistent="false"
       diskExpiryThreadIntervalSeconds="120"
       memoryStoreEvictionPolicy="LRU"/>         
 
 <cache name="referenceData"
       maxBytesLocalHeap="5M"
       eternal="true"
       memoryStoreEvictionPolicy="LRU">
       <pinning store="localMemory"/>
  </cache>
</ehcache>
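
A minimal sketch of the annotations used against the "searchResults" cache declared above (the service class and method names are illustrative):

import java.util.Arrays;
import java.util.List;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;

public class SearchService {

   // Result is stored in the "searchResults" cache keyed by the query argument;
   // repeat calls with the same query return the cached value without executing the body.
   @Cacheable(value = "searchResults", key = "#query")
   public List<String> search(String query) {
      System.out.println("Executing expensive search for: " + query);
      return Arrays.asList(query + "-result-1", query + "-result-2");
   }

   // Clears the entire cache, e.g. after the underlying data changes.
   @CacheEvict(value = "searchResults", allEntries = true)
   public void refreshSearchResults() {
   }
}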