Preparing for Java Interview.. well here is the link…


Preparing for any interview is one of the toughest jobs in this universe. I would like to share one reference link with you all. Hope it helps 🙂

R Vashi


Welcome to the Java 7 Quick Tour


While browsing some information about Java 7, I came across a very nice blog covering a few major changes in Java 7. I would like to share it with all my blog users. Here it goes..

The first example shows switch with a String; previously this was only possible with enums and integer values. Under the hood the compiled code switches on the String's hashCode(), which is an int, and then confirms the match with equals(). Below is an example of this feature.

String drink = "coffee";
switch (drink) {
    case "coffee":
        System.out.println("So you need milk");
        break;
    case "juice":
        System.out.println("So you need sugar");
        break;
    case "refrigerate":
        System.out.println("So you need ice");
        break;
    default:
        System.out.println("unknown drink");
}
I will now show you ARM, Automatic Resource Management: you no longer need to close resources yourself, because they are closed automatically when control leaves the try block. For your own classes, just implement the interface java.lang.AutoCloseable; its only method is close(). AutoCloseable is more general than Closeable: its close() method may throw any Exception, while Closeable's close() is restricted to IOException. The example below copies a file using two automatically managed streams.
public void copyFile(File original, File copy) throws IOException {
    try (InputStream in = new FileInputStream(original);
         OutputStream out = new FileOutputStream(copy)) {
        byte[] buf = new byte[1024];
        int n;
        while ((n = in.read(buf)) >= 0) {
            out.write(buf, 0, n);
        }
    } // both streams are closed automatically here
}
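Your own classes can participate in try-with-resources as well. A minimal sketch, assuming a made-up resource class just for illustration:

```java
// A hypothetical custom resource; any class implementing AutoCloseable
// can be managed by a try-with-resources block.
public class DemoResource implements AutoCloseable {

    public void doWork() {
        System.out.println("working");
    }

    @Override
    public void close() {
        // Called automatically when the try block exits, even on exception.
        System.out.println("closed");
    }

    public static void main(String[] args) {
        try (DemoResource r = new DemoResource()) {
            r.doWork();
        }
        // By this point close() has already run.
    }
}
```

Because close() here declares no checked exception, the try block needs no catch clause at all.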
Multi-catch is, for some people, the most important feature in this version: a single catch block can now handle several exception types, separated with a "|" pipe. One caveat: the alternatives must not be related by subclassing, so FileNotFoundException and its superclass IOException cannot be combined in one multi-catch; catching IOException alone covers both.

ExemploARM arm = new ExemploARM();
try {
    arm.copyFile(origem, destino);
} catch (IOException ex) {
    // With unrelated types the multi-catch syntax would be e.g.:
    // catch (IOException | SQLException ex)
    System.out.println("Can't copy the file");
}

Picture 3: using multi-catch
Java 7 also brings improvements to generics and collections, making this kind of object easier to create. It is now possible to instantiate generic collections concisely with the diamond operator "<>", letting the compiler infer the type arguments.
List<Object> diamond = new ArrayList<>(); // diamond
List<Drink> drinks;
Map<String, List<Drink>> maps = new HashMap<>();
maps.put("diamond", drinks = new ArrayList<>());
maps.put("other example", new ArrayList<Drink>());
maps.put("erro", new ArrayList<>()); // compile error in Java 7: the diamond cannot infer the element type in this argument position
Picture 4: diamond
Talking more about generic collections, there is the annotation @SafeVarargs for declaring that a varargs method is safe. Applying this annotation to a method or constructor suppresses unchecked warnings about a non-reifiable variable-arity (varargs) parameter type and suppresses unchecked warnings about parameterized array creation at call sites.
@SafeVarargs
static <T> List<T> asList(T... elements) {
    return null; // stub: the annotation suppresses the unchecked varargs warning
}

static void varargs(List<String>... stringLists) {
    Object[] array = stringLists;     // legal: arrays are covariant
    List<Integer> tmpList = Arrays.asList(42);
    array[0] = tmpList;               // compiles with a warning (heap pollution)
    String s = stringLists[0].get(0); // ClassCastException at runtime
}
The digit separator improves readability when writing big numbers in Java code: you can insert the underscore character "_" between digits, but not at the beginning or end of a number (nor next to a decimal point or before a suffix such as L). To the compiler, 2_2 equals 22. The separator also works in double and float literals. There are also binary literals, most useful when programming embedded devices: just put "0b" (zero and b) in front of the digits; this feature can use the separator too.
long longPrimitive = 9_999_999_99;
Long longObject = 9__3234_300L; // consecutive underscores between digits are allowed
double doublePrimitive = 232_32.32_12d;
Double doubleObject = 88_32.32_12d;
int binA = 0b01_01;      // 5
int binB = 0b0101_0111;  // 87
System.out.println(2_2 == 22);         // true: equal values
System.out.println(0b01_01 == 0b0101); // true: equal binary values
Picture 5: using the separator and binary literals.
Another interesting aspect of try-with-resources is that the resource variable is declared and initialized inside the try itself, so you no longer need a separate declaration plus a finally block to close it.
BufferedWriter writer = null;
try {
    writer = Files.newBufferedWriter(file, charset);
    writer.write(s, 0, s.length());
} catch (IOException x) {
    System.err.format("IOException: %s%n", x);
} finally {
    if (writer != null) {
        try { writer.close(); } catch (IOException ignored) { }
    }
}

Picture 6: before, the variable had to be declared outside the try and closed manually
try (BufferedWriter writer = Files.newBufferedWriter(file, charset)) {
    writer.write(s, 0, s.length());
} catch (IOException x) {
    System.err.format("IOException: %s%n", x);
}

Picture 7: after, using try-with-resources in Java 7

Some more features..



R Vashi

javax.persistence.PersistenceException: org.hibernate.PersistentObjectException: detached entity passed to persist


One of the issues to get your head around in both Hibernate and JPA is how to handle detached entities. In Hibernate one has to deal with the session object and in JPA it is called the persistence context.

An object, when loaded in the persistence context, is managed by JPA/Hibernate. You can force an object to become detached (i.e. no longer managed by Hibernate) by closing the EntityManager, or in a more fine-grained approach by calling the detach() method.

So it is very time consuming to debug when you face a "detached entity" exception thrown by JPA/Hibernate. There are a few possible things you should look for.

1. See if you are trying to persist or merge an entity which has the same id as another entity already present in the persistence context.

2. See if you've specified that the @Id is GENERATED by Hibernate. Do not set an ID before you save/persist the entity. Hibernate looks at the entity you've passed in and assumes that, because its PK is populated, it is already in the database.
save() and persist() do almost the same thing with slightly different semantics. persist() is JPA compliant and save() is a carryover from the original Hibernate API. Mainly, save() returns the generated PK and persist() does not. However, both will generate the PK before the actual SQL INSERT happens (if the PK is generated and not assigned).

One workaround I used to solve this was to first find the entity and then merge my changes into it. See the example below.

@PersistenceContext(unitName = "JPAUnit")
private EntityManager em;

public void saveDetails(User user) {
    User managed = em.find(User.class, user.getId()); // load into the persistence context
    if (managed != null) {
        em.merge(user); // merge the detached state into the managed instance
    } else {
        em.persist(user);
    }
}


R Vashi

JPA/Hibernate:- java.lang.IllegalStateException: No data type for node


A few days back I got stuck on a Hibernate exception I was facing while running a named query.

Exception Details:

Exception in thread "main" java.lang.IllegalStateException: No data type for node: org.hibernate.hql.ast.tree.IdentNode
 \-[IDENT] IdentNode: 'dpt' {originalText=dpt}

As I haven't worked that extensively with JPA, I was not able to catch the root cause of this exception early 😦. After carefully examining all the entity classes, I found the origin of the issue.

The named query was used to fetch the department details. (The query below represents an imaginary scenario.)

select dpt from Dept dept

The root cause was the alias name used for the entity reference. The alias 'dept' should have been used in the SELECT clause of the HQL, whereas the query referred to 'dpt', which caused this exception.

Right named query:

select dept from Dept dept

Hope this helps.

R Vashi

10 Tips for Proper Application Logging


Our latest JCP partner, Tomasz Nurkiewicz, has submitted a number of posts describing the basic principles of proper application logging. I found them quite interesting, thus I decided to aggregate them in a more compact format and present them to you. So, here are his suggestions for clean and helpful logs: (NOTE: The original posts have been slightly edited to improve readability)


1) Use the appropriate tools for the job

Many programmers seem to forget how important logging an application's behavior and its current activity is. When somebody puts:

log.info("Happy and carefree logging");

happily somewhere in the code, he probably doesn’t realize the importance of application logs during maintenance, tuning and failure identification. Underestimating the value of good logs is a terrible mistake. In my opinion, SLF4J is the best logging API available, mostly because of a great pattern substitution support:


log.debug("Found {} records matching filter: '{}'", records, filter);


In Log4j you would have to use:

log.debug("Found " + records + " records matching filter: '" + filter + "'");

This is not only longer and less readable, but also inefficient because of extensive use of string concatenation. SLF4J adds a nice {} substitution feature. Also, because string concatenation is avoided and toString() is not called if the logging statement is filtered, there is no need for isDebugEnabled() anymore. BTW, have you noticed the single quotes around the filter string parameter? SLF4J is just a façade. As an implementation I would recommend the Logback framework, already advertised, instead of the well-established Log4J. It has many interesting features and, contrary to Log4J, is actively developed. The last tool to recommend is Perf4J. To quote their motto:

"Perf4J is to System.currentTimeMillis() as log4j is to System.out.println()". I've added Perf4J to one existing application under heavy load and seen it in action in a few others. Both administrators and business users were impressed by the nice graphs produced by this simple utility. Also, we were able to discover performance flaws in no time. Perf4J itself deserves its own article, but for now just check their Developer Guide. Additionally, note that Ceki Gülcü (founder of the Log4J, SLF4J and Logback projects) suggested a simple approach to get rid of the commons-logging dependency (see his comment).

2) Don’t forget, logging levels are there for you

Every time you make a logging statement, you think hard about which logging level is appropriate for this type of event, don't you? Somehow 90% of programmers never pay attention to logging levels, simply logging everything on the same level, typically INFO or DEBUG. Why? Logging frameworks have two major benefits over System.out, i.e. categories and levels. Both allow you to selectively filter logging statements permanently or only for diagnostics time. If you really can't see the difference, print this table and look at it every time you start typing "log." in your IDE:

ERROR – something terribly wrong has happened, which must be investigated immediately. No system can tolerate items logged on this level. Example: NPE, database unavailable, mission-critical use case cannot be continued.

WARN – the process might be continued, but take extra caution. Actually I always wanted to have two levels here: one for obvious problems where a work-around exists (for example: "Current data unavailable, using cached values") and a second (name it: ATTENTION) for potential problems and suggestions. Example: "Application running in development mode" or "Administration console is not secured with a password". The application can tolerate warning messages, but they should always be justified and examined.

INFO – an important business process has finished. In an ideal world, an administrator or advanced user should be able to understand INFO messages and quickly find out what the application is doing. For example, if an application is all about booking airplane tickets, there should be only one INFO statement per ticket saying "[Who] booked ticket from [Where] to [Where]". Another definition of an INFO message: each action that changes the state of the application significantly (database update, external system request).

DEBUG – developers' stuff. I will discuss later what sort of information deserves to be logged.

TRACE – very detailed information, intended only for development. You might keep trace messages for a short period of time after deployment to the production environment, but treat these log statements as temporary ones that should or might be turned off eventually. The distinction between DEBUG and TRACE is the most difficult, but if you add a logging statement and remove it after the feature has been developed and tested, it should probably be on the TRACE level. The list above is just a suggestion; you can create your own set of instructions to follow, but it is important to have some. My experience is that everything is always logged without filtering (at least from the application code), but having the ability to quickly filter logs and extract information at the proper detail level might be a life-saver. The last thing worth mentioning is the infamous is*Enabled() condition. Some put it before every logging statement:

if (log.isDebugEnabled()) {
    log.debug("Place for your commercial");
}

Personally, I find this idiom being just clutter that should be avoided. The performance improvement (especially when using SLF4J pattern substitution discussed previously) seems irrelevant and smells like a premature optimization. Also, can you spot the duplication? There are very rare cases when having explicit condition is justified – when we can prove that constructing logging message is expensive. In other situations, just do your job of logging and let logging framework do its job (filtering).

3) Do you know what you are logging?

Every time you issue a logging statement, take a moment and have a look at what exactly will land in your log file. Read your logs afterwards and spot malformed sentences. First of all, avoid NPEs like this:

log.debug("Processing request with id: {}", request.getId());

Are you absolutely sure that request is not null here? Another pitfall is logging collections. If you fetched collection of domain objects from the database using Hibernate and carelessly log them like here:

log.debug("Returning users: {}", users);

SLF4J will call toString() only when the statement is actually printed, which is quite nice. But if it does... an out-of-memory error, the N+1 select problem, thread starvation (logging is synchronous!), a lazy initialization exception, or completely filled log storage might occur. It is a much better idea to log, for example, only the ids of the domain objects (or even only the size of the collection). But making a collection of ids from a collection of objects with a getId() method is unbelievably difficult and cumbersome in Java. Groovy has a great spread operator (users*.id); in Java we can emulate it using the Commons BeanUtils library:

log.debug("Returning user ids: {}", collect(users, "id"));

Where collect() method can be implemented as follows:

public static Collection collect(Collection collection, String propertyName) {
    return CollectionUtils.collect(collection, new BeanToPropertyValueTransformer(propertyName));
}
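If you would rather avoid the Commons BeanUtils dependency, a plain loop does the same job for the common case of collecting ids. A self-contained sketch (the User class here is made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class CollectIdsDemo {

    // Minimal domain class standing in for a real entity.
    static class User {
        private final long id;
        User(long id) { this.id = id; }
        long getId() { return id; }
    }

    // Dependency-free equivalent of collect(users, "id") for this one property.
    static List<Long> collectIds(Collection<User> users) {
        List<Long> ids = new ArrayList<Long>();
        for (User u : users) {
            ids.add(u.getId());
        }
        return ids;
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(new User(1), new User(2), new User(3));
        System.out.println(collectIds(users)); // safe to log: just the ids
    }
}
```

The trade-off is one tiny helper per property instead of one reflective helper for all of them, but it is type-safe and fails at compile time rather than at runtime.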

The last thing to mention is the improper implementation or usage of toString(). First, create toString() for each class that appears anywhere in logging statements, preferably using ToStringBuilder (but not its reflective counterpart). Second, watch out for arrays and non-typical collections. Arrays and some strange collections might not implement toString() by calling toString() on each item. Use the Arrays.deepToString JDK utility method. And read your logs often to spot incorrectly formatted messages.

4) Avoid side effects

Logging statements should have no or little impact on the application's behavior. Recently a friend of mine gave an example of a system that threw Hibernate's LazyInitializationException only when running in some particular environment. As you've probably guessed from the context, some logging statement caused a lazily initialized collection to be loaded when the session was attached. In that environment the logging levels were increased and the collection was no longer initialized. Think how long it would take you to find the bug without knowing this context! Another side effect is slowing the application down. Quick answer: if you log too much or improperly use toString() and/or string concatenation, logging has a performance side effect. How big? Well, I have seen a server restarting every 15 minutes because of thread starvation caused by excessive logging. Now this is a side effect! From my experience, a few hundred MiB is probably the upper limit of how much you can log onto disk per hour. Of course, if the logging statement itself fails and causes the business process to terminate due to an exception, this is also a huge side effect. I have seen such a construct to avoid this:

try {
    log.trace("Id=" + request.getUser().getId() + " accesses " + manager.getPage().getUrl().toString());
} catch (NullPointerException e) {}

This is real code, but please make the world a bit better place and don't do it, ever.

5) Be concise and descriptive

Each logging statement should contain both data and description. Consider the following examples:

log.debug("Message processed");

log.debug(message.getJMSMessageID());

log.debug("Message with id '{}' processed", message.getJMSMessageID());

Which log would you like to see while diagnosing failure in an unknown application? Believe me, all the examples above are almost equally common. Another anti-pattern:

if (message instanceof TextMessage)
    //...
else
    log.warn("Unknown message type");

Was it so hard to include the actual message type, message id, etc. in the warning string? I know something went wrong, but what? What was the context? A third anti-pattern is the "magic log". Real-life example: most programmers in the team knew that three ampersands followed by an exclamation mark, followed by a hash, followed by a pseudorandom alphanumeric string in the log meant "Message with XYZ id received". Nobody bothered to change the log; someone simply hit the keyboard and chose some unique "&&&!#" string, so that he could easily find it himself. As a consequence, the whole log file looks like a random sequence of characters. Somebody might even consider that file to be a valid Perl program. Instead, a log file should be readable, clean and descriptive. Don't use magic numbers; log values, numbers and ids, and include their context. Show the data being processed and show its meaning. Show what the program is actually doing. Good logs can serve as great documentation of the application code itself. Did I mention not to log passwords or any personal information? Don't!

6) Tune your pattern
6) Tune your pattern
The logging pattern is a wonderful tool that transparently adds meaningful context to every logging statement you make. But you must consider very carefully which information to include in your pattern. For example, logging the date when your logs roll every hour is pointless, as the date is already included in the log file name. On the contrary, without logging the thread name you would be unable to track any process using logs when two threads work concurrently – the logs will overlap. This might be fine in single-threaded applications – but those are almost dead nowadays. From my experience, the ideal logging pattern should include (apart from the logged message itself): current time (without date, milliseconds precision), logging level, name of the thread, simple logger name (not fully qualified) and the message. In Logback it is something like:

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} %-5level [%thread][%logger{0}] %m%n</pattern>
    </encoder>
</appender>

You should never include file name, class name and line number, although it's very tempting. I have even seen empty log statements issued from the code:

log.debug("");

because the programmer assumed that the line number would be part of the logging pattern and he knew that "if an empty logging message appears in the 67th line of the file (in the authenticate() method), it means that the user is authenticated". Besides, logging the class name, method name and/or line number has a serious performance impact. A somewhat more advanced feature of logging frameworks is the concept of the Mapped Diagnostic Context. MDC is simply a map managed on a thread-local basis. You can put any key-value pair in this map, and from then on every logging statement issued from this thread will have this value attached as part of the pattern.
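Conceptually, MDC is nothing more than a thread-local map that the pattern layout consults when formatting each statement. A toy emulation in plain Java (this is not the real SLF4J API, just a sketch of the mechanics):

```java
import java.util.HashMap;
import java.util.Map;

public class MdcDemo {

    // Each thread gets its own context map, like SLF4J's MDC.
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            new ThreadLocal<Map<String, String>>() {
                @Override
                protected Map<String, String> initialValue() {
                    return new HashMap<String, String>();
                }
            };

    static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    // Stand-in for the logging framework: prepends the context to each message.
    static void log(String message) {
        System.out.println(CONTEXT.get() + " " + message);
    }

    public static void main(String[] args) {
        put("user", "alice");
        log("payment accepted"); // the user id now tags every statement on this thread
    }
}
```

The real MDC works the same way: put the user id (or request id) once at the beginning of processing, and every subsequent log line on that thread carries it for free.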

7) Log method arguments and return values

When you find a bug during development, you typically run a debugger trying to track down the potential cause. Now imagine for a while that you can't use a debugger: for example, because the bug manifested itself in a customer environment a few days ago and all you have is logs. Would you be able to find anything in them? If you follow the simple rule of logging each method's input and output (arguments and return values), you don't even need a debugger any more. Of course, you must be reasonable, but every method that accesses an external system (including a database), blocks, waits, etc. should be considered. Simply follow this pattern:

public String printDocument(Document doc, Mode mode) {
    log.debug("Entering printDocument(doc={}, mode={})", doc, mode);
    String id = ...; // lengthy printing operation
    log.debug("Leaving printDocument(): {}", id);
    return id;
}

Because you are logging both the beginning and the end of the method invocation, you can manually discover inefficient code and even detect possible causes of deadlocks and starvation – simply by looking for an "entering" without a corresponding "leaving". If your methods have meaningful names, reading the logs will be a pleasure. Also, analyzing what went wrong is much simpler, since at each step you know exactly what has been processed. You can even use a simple AOP aspect to log a wide range of methods in your code. This reduces code duplication, but be careful, since it may lead to an enormous amount of huge logs. You should consider DEBUG or TRACE levels as best suited for these types of logs. And if you discover some methods are called too often and logging might harm performance, simply decrease the logging level for that class or remove the log completely (maybe leaving just one for the whole method invocation?). But it is always better to have too many rather than too few logging statements. Treat logging statements with the same respect as unit tests – your code should be covered with logging routines as it is with unit tests. No part of the system should stay with no logs at all. Remember, sometimes observing logs rolling by is the only way to tell whether your application is working properly or hanging forever.

8) Watch out for external systems

This is a special case of the previous tip: if you communicate with an external system, consider logging every piece of data that goes out from your application and comes in. Period. Integration is a tough job and diagnosing problems between two applications (think two different vendors, environments, technology stacks and teams) is particularly hard. Recently, for example, we've discovered that logging full message contents, including SOAP and HTTP headers, in Apache CXF web services is extremely useful during integration and system testing.

This is a big overhead and if performance is an issue, you can always disable logging. But what is the point of having a fast but broken application that no one can fix? Be extra careful when integrating with external systems and be prepared to pay that cost. If you are lucky and all your integration is handled by an ESB, then the bus is probably the best place to log every incoming request and response. See for example Mule's log-component. Sometimes the amount of information exchanged with external systems makes it unacceptable to log everything. On the other hand, during testing and for short periods of time on production (for example when something wrong is happening), we would like to have everything saved in the logs and are ready to pay the performance cost. This can be achieved by carefully using logging levels. Just take a look at the following idiom:

Collection<Integer> requestIds = //...
if (log.isDebugEnabled()) {
    log.debug("Processing ids: {}", requestIds);
} else {
    log.info("Processing ids size: {}", requestIds.size());
}

If this particular logger is configured to log DEBUG messages, it will print the whole requestIds collection contents. But if it is configured to print INFO messages, only the size of the collection will be output. If you are wondering why I forgot about the isInfoEnabled() condition, go back to tip #2. One thing worth mentioning is that the requestIds collection should not be null in this case. Although it would be logged correctly as null if DEBUG were enabled, a big fat NullPointerException will be thrown if the logger is configured to INFO. Remember my lesson about side effects in tip #4?

 9) Log exceptions properly
First of all, avoid logging exceptions; let your framework or container (whatever it is) do it for you. There is one, ahem, exception to this rule: if you throw exceptions from some remote service (RMI, EJB remote session bean, etc.) that is capable of serializing exceptions, make sure all of them are available to the client (are part of the API). Otherwise the client will receive NoClassDefFoundError: SomeFancyException instead of the "true" error. Logging exceptions is one of the most important roles of logging, but many programmers tend to treat logging as a way to handle the exception. They sometimes return a default value (typically null, 0 or an empty string) and pretend that nothing has happened. Other times they first log the exception and then wrap it and throw it back:


log.error("IO exception", e);
throw new MyCustomException(e);

This construct will almost always print the same stack trace two times, because something will eventually catch MyCustomException and log its cause. Log, or wrap and throw back (which is preferable) – never both, otherwise your logs will be confusing. But what if we really do want to log the exception? For some reason (because we don't read APIs and documentation?), about half of the logging statements I see are wrong. Quick quiz: which of the following log statements will log the NPE properly?


try {
    Integer x = null;
    ++x; // throws NullPointerException
} catch (Exception e) {
    log.error(e); //A
    log.error(e, e); //B
    log.error("" + e); //C
    log.error(e.toString()); //D
    log.error(e.getMessage()); //E
    log.error(null, e); //F
    log.error("", e); //G
    log.error("{}", e); //H
    log.error("{}", e.getMessage()); //I
    log.error("Error reading configuration file: " + e); //J
    log.error("Error reading configuration file: " + e.getMessage()); //K
    log.error("Error reading configuration file", e); //L
}

Surprisingly, only G and (preferably) L are correct! A and B don't even compile in SLF4J; the others discard the stack trace and/or print improper messages. For example, E will not print anything, as an NPE typically doesn't provide any exception message, and the stack trace won't be printed either. Remember, the first argument is always the text message: write something about the nature of the problem. Don't include the exception message, as it will be printed automatically after the log statement, preceding the stack trace. But in order to do so, you must pass the exception itself as the second argument.

10) Logs easy to read, easy to parse
There are two groups of receivers particularly interested in your application logs: human beings (you might disagree, but programmers belong to this group as well) and computers (typically shell scripts written by system administrators). Logs should be suitable for both of these groups. If someone looking over your shoulder at your application logs sees something resembling an obfuscated Perl program (the original post illustrated this with an image sourced from Wikipedia), then you probably have not followed my tips. Logs should be readable and easy to understand, just like the code should. On the other hand, if your application produces half a GB of logs each hour, no man and no graphical text editor will ever manage to read them entirely. This is where old-school grep, sed and awk come in handy. If possible, try to write logging messages in such a way that they can be understood both by humans and computers, e.g. avoid formatting of numbers, use patterns that can be easily recognized by regular expressions, etc. If that is not possible, print the data in two formats:

log.debug("Request TTL set to: {} ({})", new Date(ttl), ttl);
// Request TTL set to: Wed Apr 28 20:14:12 CEST 2010 (1272478452437)

final String duration = DurationFormatUtils.formatDurationWords(durationMillis, true, true);
log.info("Importing took: {}ms ({})", durationMillis, duration);
// Importing took: 123456789ms (1 day 10 hours 17 minutes 36 seconds)
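If Commons Lang's DurationFormatUtils is not on the classpath, a rough stdlib-only approximation can be sketched with TimeUnit arithmetic (the class and method names here are made up, and unlike the real utility it does not singularize "1 days"):

```java
import java.util.concurrent.TimeUnit;

public class DurationWordsDemo {

    // Breaks a millisecond duration into "d days h hours m minutes s seconds".
    static String durationWords(long millis) {
        long days = TimeUnit.MILLISECONDS.toDays(millis);
        long hours = TimeUnit.MILLISECONDS.toHours(millis) % 24;
        long minutes = TimeUnit.MILLISECONDS.toMinutes(millis) % 60;
        long seconds = TimeUnit.MILLISECONDS.toSeconds(millis) % 60;
        return days + " days " + hours + " hours "
                + minutes + " minutes " + seconds + " seconds";
    }

    public static void main(String[] args) {
        long durationMillis = 123456789L;
        // Log both the machine-friendly and the human-friendly form.
        System.out.println("Importing took: " + durationMillis + "ms ("
                + durationWords(durationMillis) + ")");
    }
}
```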

Computers will appreciate “ms after 1970 epoch” time format, while people would be delighted seeing “1 day 10 hours 17 minutes 36 seconds” text. BTW take a look at DurationFormatUtils, nice tool. That’s all guys, a “logging tips extravaganza” from our JCP partner, Tomasz Nurkiewicz. Don’t forget to share!

R Vashi

Serialize a Singleton Class


As we know, a singleton class is a special kind of class that maintains a single instance throughout the application. We mainly use singleton classes to implement controlled-access services like a connection pool factory, a service factory, etc.

But the question that arises is: can we serialize/deserialize a singleton class object? And the answer is YES WE CAN, but that has to be handled in the singleton class itself.

To make the class serializable we just have to implement the Serializable interface. But the main thing we have to handle is the deserialization part of that object.

By default, the deserialization process creates a new instance of the class. The example below shows how to customize the deserialization of a singleton to avoid creating new instances.

public class DBFactory implements Serializable {

    private static final DBFactory singleton = new DBFactory();

    private DBFactory() {
    }

    // This method returns the singleton instance.
    public static synchronized DBFactory getInstance() {
        return singleton;
    }

    // This method is called immediately after an object of this class is deserialized.
    protected Object readResolve() {
        // Instead of the object we're on, return the class variable singleton.
        return singleton;
    }
}
For Serializable and Externalizable classes, the readResolve method allows a class to replace/resolve the object read from the stream before it is returned to the caller. By implementing the readResolve method, a class can directly control the types and instances of its own instances being deserialized.
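A quick round-trip demonstrates the effect: with readResolve in place, deserialization hands back the existing singleton rather than a fresh copy. A self-contained sketch (the class names here mirror the post's DBFactory but are standalone for demonstration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SingletonSerializationDemo {

    // Minimal serializable singleton with readResolve.
    static class DBFactory implements Serializable {
        private static final long serialVersionUID = 1L;
        private static final DBFactory SINGLETON = new DBFactory();
        private DBFactory() {}
        static DBFactory getInstance() { return SINGLETON; }
        // Replaces the freshly deserialized object with the existing singleton.
        protected Object readResolve() { return SINGLETON; }
    }

    public static void main(String[] args) throws Exception {
        DBFactory original = DBFactory.getInstance();

        // Serialize the singleton to a byte array...
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(original);
        }

        // ...and deserialize it back.
        DBFactory copy;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            copy = (DBFactory) in.readObject();
        }

        // Thanks to readResolve, both references point to the same instance.
        System.out.println(copy == original);
    }
}
```

Without readResolve, the final comparison would print false, because the stream would have manufactured a second instance.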

PS: if you depend on readResolve for instance control, all instance fields with object reference types must be declared transient. Otherwise, it is possible for an attacker to obtain a reference to the deserialized object before its readResolve method is run.

R Vashi

Deploying a EJB 2.1 Stateful Bean on Weblogic 10.3


Deploying an EJB 2.1 stateful bean is quite easy in WebLogic. First develop the stateful bean and package it as a jar. I will show how to build and deploy the EJB on WebLogic 10.3 step by step.

There are five steps involved in the EJB session bean development and deployment process.

1. Create the Home Interface

2. Create the Remote Interface

3. Create the Session Bean

4. Define the resources in the DD

5. Deploy on the server

1. Create the Home Interface


package com.test.stf;

import java.rmi.RemoteException;

import javax.ejb.CreateException;

import javax.ejb.EJBHome;

public interface CounterHome extends EJBHome {

public Counter create(String name) throws CreateException, RemoteException;

}


2. Create the Component Interface (Remote)

package com.test.stf;

import java.rmi.RemoteException;

import javax.ejb.EJBObject;

public interface Counter extends EJBObject {

public void startCounter() throws RemoteException;

public int getCounter() throws RemoteException;

}


3. Create the Stateful Session Bean

package com.test.stf;

import java.io.Serializable;
import java.rmi.RemoteException;
import javax.ejb.EJBException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class HandleCounter implements SessionBean, Serializable {

    private static final long serialVersionUID = 1L;
    private String userName = "";
    private SessionContext ctx;
    private int counter = 0;

    public void ejbActivate() throws EJBException, RemoteException {
        System.out.println("Inside ejbActivate");
    }

    public void ejbPassivate() throws EJBException, RemoteException {
        System.out.println("Inside ejbPassivate");
    }

    public void ejbRemove() throws EJBException, RemoteException {
        System.out.println("Inside ejbRemove");
    }

    public void setSessionContext(SessionContext arg0) throws EJBException, RemoteException {
        this.ctx = arg0; // save the context reference for later use
    }

    public void startCounter() throws RemoteException {
        counter++; // conversational state, kept across calls by the stateful bean
    }

    public int getCounter() throws RemoteException {
        return counter;
    }

    public void ejbCreate(String name) throws RuntimeException {
        this.userName = name;
    }
}

Below is a list of high-level rules we must ensure while developing the EJB objects.

-> Rules for the EJB Home Interface:

1. Must extend the EJBHome interface

2. Must have at least one create method

3. The name of each create method must begin with "create", e.g. createClientPortfolio etc.

4. Each create method must return the component interface type (remote interface)

5. Each create method must throw CreateException and RemoteException.

-> Rules for the Remote Interface:

1. Must extend the EJBObject interface

2. Should contain the business methods

3. Every business method must throw RemoteException

-> Rules for Session Beans:

1. Every session bean must implement the SessionBean interface

2. For a stateless session bean there must be only one create method, it must be named "ejbCreate", and it must take no arguments.

3. Every bean should save a copy of the SessionContext in the "setSessionContext" method; this method is called only once, when the bean is created, so saving the reference is always useful.

4. A stateful session bean does not need a no-arg create method, and you can declare as many create methods as you like,
e.g. ejbCreate(), ejbCreate(String), ejbCreateCustomerPortfolio(String) etc.

4. Now let's prepare the DD (deployment descriptor) of the EJB component

<?xml version="1.0" encoding="UTF-8"?>

<ejb-jar id="ejb-jar_ID" version="2.1" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/ejb-jar_2_1.xsd">

<enterprise-beans>

<session>

<description>Test Stateful on Weblogic</description>

<ejb-name>TestStateful</ejb-name>

<home>com.test.stf.CounterHome</home>

<remote>com.test.stf.Counter</remote>

<ejb-class>com.test.stf.HandleCounter</ejb-class>

<session-type>Stateful</session-type>

<transaction-type>Container</transaction-type>

</session>

</enterprise-beans>

</ejb-jar>
ejb-name: give any name for the EJB
home: fully qualified name of the home interface
remote: fully qualified name of the remote interface
ejb-class: fully qualified name of the session bean class
session-type: Stateful or Stateless (in our case Stateful)
transaction-type: Container | Bean (let the container handle this)

As we are deploying on WebLogic, we also need weblogic-ejb-jar.xml so that WebLogic can treat this as an EJB component and set the deploy-time configuration (JNDI name, security, etc.).

<?xml version="1.0" encoding="UTF-8"?>

<weblogic-ejb-jar xmlns="http://www.bea.com/ns/weblogic/weblogic-ejb-jar" xmlns:j2ee="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

<weblogic-enterprise-bean>

<ejb-name>TestStateful</ejb-name>

<jndi-name>TestStateful</jndi-name>

</weblogic-enterprise-bean>

</weblogic-ejb-jar>
ejb-name: make sure to use the same ejb-name defined in ejb-jar.xml
jndi-name: the JNDI name given to the EJB component (clients use this name for lookup)

5. Deployment
To deploy, simply package the EJB classes and descriptors into a jar and deploy it through the WebLogic admin console.

Structure of the EJB jar:
- com/test/stf/*.class
- META-INF/ejb-jar.xml
- META-INF/weblogic-ejb-jar.xml
- META-INF/MANIFEST.MF

For a stateless bean, keep everything the same and change the bean's create method to the no-arg form.

Client to test:

package com.test.stf;

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

public class CounterClient { // client class name is illustrative

    public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";

    public static void main(String[] args) throws Exception {
        InitialContext ic = getInitialContext("t3://localhost:7001");
        CounterHome home = (CounterHome) PortableRemoteObject.narrow(
                ic.lookup("TestStateful"), CounterHome.class);
        Counter adv = home.create("ejbMarathon");
        adv.startCounter();
        System.out.println("Counter = " + adv.getCounter());
    }

    private static InitialContext getInitialContext(String url)
            throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_PRINCIPAL, "ejbuser");
        env.put("weblogic.jndi.createIntermediateContexts", "true");
        return new InitialContext(env);
    }
}
Hope This helps.

R Vashi

Building a Basic Web service using JAX-WS

Web Service: A web service is a service that exchanges XML data in the form of SOAP requests and SOAP responses. The advantage of web services is communication across different language platforms, e.g. Java, .NET, C++, PHP, etc. Web services play a crucial part in SOA (Service Oriented Architecture), which is again a very broad term and one of the most successful architectures adopted by many enterprise-level applications today. Let's end the definition part here and move to the web service development part. (Hope the intro has made the basic concept of web services clear; if not, please do Google more on web services 🙂 ).
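To make the XML exchange concrete, here is roughly what a SOAP request/response pair looks like for a simple operation (operation name, namespace, and payload values are illustrative):

```xml
<!-- request -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getPingStatus xmlns="com.webservice.pingserver"/>
  </soap:Body>
</soap:Envelope>

<!-- response -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getPingStatusResponse xmlns="com.webservice.pingserver">
      <return>Hi!! I am active :1280000000000</return>
    </getPingStatusResponse>
  </soap:Body>
</soap:Envelope>
```

The JAX-WS runtime builds and parses these envelopes for you; you only deal with the Java method call.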

Develop a Simple Web Service

To develop a web service there are 2 types of approaches we can follow.

1.       Contract First Approach.

2.       Code First Approach

Contract First Approach: This approach starts from the WSDL and requires very good knowledge of WSDL, XML, and XSD. Once you have some expertise with web services you can try your hand at this approach. In this post I will describe how to build a web service using the second approach.

Code First

This approach starts at the code level, i.e. by writing a Java class and defining all the properties of the web service: ports, operations, endpoints, etc.

Let's develop a Ping web service: this web service will simply return a greeting message with the current date and time.

First of all, write a Java class:

package com.webservice.sample;

import java.util.Date;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;

@WebService(name="PingServer", serviceName="PingServer", portName="PingServerPort", targetNamespace="com.webservice.pingserver")
@SOAPBinding(style=SOAPBinding.Style.DOCUMENT, use=SOAPBinding.Use.LITERAL, parameterStyle=SOAPBinding.ParameterStyle.WRAPPED)
public class PingServer {

    @WebMethod
    public String getPingStatus() {
        return "Hi!! I am active :" + new Date().getTime();
    }
}

@WebService:

name: defines the name of the web service
serviceName: defines the name of the service element in the WSDL

portName: defines the name of the port

targetNamespace: defines a namespace for your web service; if none is specified, the compiler will derive a default namespace from your package name in reverse order.


@SOAPBinding:

style: defines the SOAP style, e.g. SOAPBinding.Style.DOCUMENT

use: specifies the SOAP message format, e.g. SOAPBinding.Use.LITERAL

parameterStyle: defines how web service request/reply messages are interpreted by a web service provider/consumer. Quite simply, "wrapped" style tells the web service provider that the root element of the message (also called the "wrapper element") represents the name of the operation and is not part of the payload. This also means that children of the root element must map directly to parameters of the operation's signature. The "non-wrapped" style (also sometimes called "bare") does not make this assumption; in that case the entire message is passed to the service operation. The reply message is handled in a similar way.
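As an illustration, with WRAPPED parameter style the request body for a hypothetical add(int a, int b) operation carries a wrapper element named after the operation, with one child element per parameter:

```xml
<soap:Body>
  <add xmlns="com.webservice.pingserver">
    <a>2</a>
    <b>3</b>
  </add>
</soap:Body>
```

With BARE style there is no such wrapper; the single payload element itself is handed to the operation.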

@WebMethod: specifies a method that is exposed as a web service operation.

If you pass any arguments to a web service method,
you can use the @WebParam(name="argumentName") annotation to comply with the schema. Otherwise generic names like "in0", "in1" will be used in the WSDL for the input arguments.

Generating the Web Service Artifacts

Now we need to create the web service artifacts. There is a tool called wsgen; it reads the service endpoint class and generates the WSDL and XML Schema for the web service that is to be published.

To run the tool, first compile the web service class we created above, then open a command console, move to the compiled classes directory (e.g. the "/bin" dir) and run the command below.

wsgen -cp . com.webservice.sample.PingServer -wsdl

Once you run the tool you will notice the WSDL and XSD files created in the "/bin" directory.

Now let's publish the web service.

In this part we will use the Endpoint class to publish the web service using a lightweight web server.

import javax.xml.ws.Endpoint;

public class TestPublisher {

    public static void main(String[] args) {
        Endpoint.publish("http://localhost:8011/service/pingserver", new PingServer());
    }
}

Call the publish method with the full URL of the web service and an instance of the service implementation:

Endpoint.publish(String address, Object implementor);

Once you run the publisher, go to Run -> Internet Explorer

and open the WSDL URL, i.e. the publish address with "?wsdl" appended (http://localhost:8011/service/pingserver?wsdl).

If you are able to see the WSDL, time to cheer!!!!! The web service has been published successfully.

[Note] Quit the publisher class to stop the lightweight web server.

I will explain how to write a web service client to test this service in my next post. Please give your suggestions or feedback.

R Vashi

JAXB: Unmarshalling XML strings


In this article I will show how to unmarshal an XML string using JAXB. Usually JAXB uses InputStreams and OutputStreams for XML text input and output, respectively.

One workaround is to pass a ByteArrayInputStream (for input) or ByteArrayOutputStream (for output) wrapping the XML string, or to use a StreamSource reading the input via a StringReader.
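The ByteArrayInputStream part of the trick is plain java.io and can be seen in isolation before any JAXB enters the picture (class name and sample XML are illustrative):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StringAsStream {
    public static void main(String[] args) throws Exception {
        String xml = "<items><item>hello</item></items>";
        // Wrap the String's bytes so any API expecting an InputStream can consume it
        InputStream in = new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8));
        BufferedReader r = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8));
        System.out.println(r.readLine()); // reads the wrapped string back
    }
}
```

Unmarshaller.unmarshal(InputStream) consumes the stream exactly the same way the reader does here.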

You can simply read my previous article "Sample on JAXB" to get complete instructions on building a sample JAXB project with the Eclipse plugin.

Below is a method we can use to unmarshal an XML string into JAXB objects.

1. Via ByteArrayInputStream

public List loadObjectFromXMLString(String xmlString) {
    try {
        ByteArrayInputStream input = new ByteArrayInputStream(xmlString.getBytes());
        Object jaxbObject = unmarshaller.unmarshal(input);
        if (items == null) {
            items = (ItemsType) ((JAXBElement) jaxbObject).getValue();
        }
        return items.getItem(); // generated list accessor
    } catch (Exception e) {
        return null;
    }
}

2. Via StreamSource

JAXBContext jc = JAXBContext.newInstance("com.mytest.jxb");
Unmarshaller u = jc.createUnmarshaller();
StringBuffer xmlStr = new StringBuffer("<?xml version=\"1.0\"?>...");
Object o = u.unmarshal(new StreamSource(new StringReader(xmlStr.toString())));

[NOTE] This example is entirely based on my previous article "Sample on JAXB". Simply add the above method/statements if you are building the project from those instructions. There are also many other ways to do the same thing.

R Vashi

Sample on JAXB using Eclipse

Java Architecture for XML Binding (JAXB) allows Java developers to map Java classes to XML representations.

JAXB provides two main features:

1. The ability to marshal Java objects into XML

2. To unmarshal  XML back into Java objects.

In other words, JAXB allows storing and retrieving data in memory in any XML format, without the need to implement a specific set of XML loading and saving routines for the program’s class structure. It is similar to xsd.exe and xmlserializers in .Net Framework.

JAXB is particularly useful when the specification is complex and changing. In such a case, regularly changing the XML Schema definitions to keep them synchronised with the Java definitions can be time consuming and error prone.

Follow the steps below to configure and test a JAXB sample project in Eclipse.

1. Install the Eclipse plug-in for JAXB 2.0,
or click on the link to download jaxb-xjc (rename the "docx" extension to "jar").

2. Once downloaded, extract the zip file and copy the folder "org.jvnet.jaxbw.eclipse_1.0.0" into the Eclipse home directory > "plugins".

3. Restart Eclipse; if the plug-in doesn't appear, simply run "eclipse.exe -clean".

4. Now create a project SampleJXB and inside the Java source folder create a package, e.g. "com.mytest.jxb".

5. Add one XSD with the contents below.

<?xml version="1.0" encoding="utf-16"?>

<xsd:schema attributeFormDefault="unqualified" elementFormDefault="qualified" version="1.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

<xsd:element name="persons" type="itemsType" />

<xsd:complexType name="itemsType">

<xsd:sequence>

<xsd:element maxOccurs="unbounded" name="item" type="itemType" />

</xsd:sequence>

</xsd:complexType>

<xsd:complexType name="itemType">

<xsd:sequence>

<xsd:element name="firstname" type="xsd:string" />

<xsd:element name="lastname" type="xsd:string" />

<xsd:element name="email" type="xsd:string" />

</xsd:sequence>

</xsd:complexType>

</xsd:schema>
6. Now right-click on the XSD file and choose JAXB 2.0 -> Run XJC.

7. You will be prompted for a package name and output directory in the wizard. Simply add the package name given in step 4 and follow the rest of the steps.

8. Now navigate to the Java package. You will notice 3 classes got generated after running the JAXB command: ObjectFactory, ItemsType, and ItemType.
9. Now write one Java class to interact with JAXB.

package com.mytest.jxb;

import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;

public class PersonListManager {

    private JAXBContext jaxbContext = null;
    private Unmarshaller unmarshaller = null;
    private ItemsType items = null;

    public PersonListManager() {
        try {
            // MAKE SURE THIS IS THE SAME PACKAGE NAME GIVEN IN STEP 4
            jaxbContext = JAXBContext.newInstance("com.mytest.jxb");
            unmarshaller = jaxbContext.createUnmarshaller();
        } catch (JAXBException e) {
            e.printStackTrace();
        }
    }

    public List<ItemType> loadXML(InputStream istrm) {
        try {
            Object obj = unmarshaller.unmarshal(istrm);
            if (items == null) {
                items = (ItemsType) ((JAXBElement) obj).getValue();
            }
            return items.getItem(); // generated list accessor
        } catch (JAXBException e) {
            e.printStackTrace();
        }
        return null;
    }

    /**
     * This method will write the data back to the XML.
     * @param xmlName
     * @throws Exception
     */
    public void writeDataInXML(String xmlName) throws Exception {
        /* Make sure ItemsType has @XmlRootElement(name="items"); if missing, add it */
        ObjectFactory factory = new ObjectFactory();
        ItemsType persons = factory.createItemsType();
        ItemType item = factory.createItemType();
        item.setFirstname("John");       // sample data (illustrative)
        item.setLastname("Doe");
        item.setEmail("john.doe@example.com");
        persons.getItem().add(item);

        Marshaller marshaller = jaxbContext.createMarshaller();
        marshaller.marshal(persons, new FileOutputStream(xmlName));
    }
}

10. Now write one more Java class to test the JAXB.

package com.mytest.jxb;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class TestJAXB {

    public static void main(String[] args) {
        PersonListManager xmgr = new PersonListManager();
        File file = new File("NewXMLSchema.xml");
        List<ItemType> lst = new ArrayList<ItemType>();
        try {
            FileInputStream fis = new FileInputStream(file);
            lst = xmgr.loadXML(fis);
            Iterator<ItemType> it = lst.iterator();
            while (it.hasNext()) {
                ItemType item = it.next();
                System.out.println("First Name = " + item.getFirstname().trim() +
                        "\t\tLast Name = " + item.getLastname().trim() +
                        "\t\tEmail = " + item.getEmail().trim());
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

R Vashi