Monday, May 14, 2012

Jogger design decisions FAQ

Jogger is a lightweight library for building Java web applications that provides a routing mechanism and a view template engine based on FreeMarker. The source and documentation are on GitHub.

In this post, however, I would like to answer some questions about the design decisions I made with Jogger, which will help you understand the philosophy behind it.

Why do we need another Java Web Framework?

I wanted a super simple Java web framework that would allow me to:
  • Map HTTP requests to Java methods: similar to Play! Framework (which is inspired by Ruby on Rails).
  • Use a better request/response API than the one provided by the Servlet API.
  • Define views and layouts: reuse layouts in multiple views.
  • Plug into any project: let me choose the project structure and the libraries/frameworks I want to use; don’t make this hard on me.
  • Deploy into any Servlet Container: such as Jetty or Tomcat.
  • Easy testing: test controllers and views without having to start a Servlet Container.

Why not a component based framework like JSF, Wicket, etc.?

The short answer is that they couldn't keep the promise of isolating the user from client-side development and the request/response nature of HTTP. The long answer is here.

Why not use the Servlet API directly?

Because it sucks! It's difficult to test, it has an endless hierarchy of classes, exposes hundreds of useless methods and provides an inconvenient configuration mechanism. The only good part is that it doesn't try to hide the request/response nature of HTTP.

Why use FreeMarker for the view?

JSPs (Java Server Pages) were dismissed from the beginning; you just can't create and reuse a layout with them. My first options were Jade and Haml, but there is no decent implementation for Java that I know of. The truth is that I am happy with FreeMarker now.

Why not use annotations instead of the routes.config file?

First, with annotations you would have all this routing configuration scattered across multiple Java files; I wanted to keep this information centralized and as simple as possible. Second, I'm against scanning the classpath (one of the reasons I moved away from JEE); it just doesn't scale. Third, it would be a problem for hot deployment.
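To make the idea concrete, a centralized routes file maps an HTTP method and a path to a controller method in one place. The snippet below is only a hypothetical illustration of the idea, not necessarily Jogger's actual syntax; check the documentation on GitHub for the real format:

```
# HTTP method   path           controller#method
GET             /              PagesController#index
GET             /users/{id}    UsersController#show
POST            /users         UsersController#create
```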

Why expose the Jogger Request and Response in the controller actions?

Because I think we should embrace the fact that HTTP is a request/response protocol. You retrieve things from the request and use the response to, well, respond.

Why test without starting a Servlet Container?

First, it's unnecessary overhead. Besides, you want to test that your routing is working as expected, that your controller is rendering the correct template, etc., and I think that mocking the Servlet Container works well in these cases. It really depends on how you need to test your application.


If you have another question or think that an answer is not clear enough, ping me and I will add it or improve it. Thanks.

Saturday, May 12, 2012

Java web development: back to basics

The first post that I wrote on this blog is titled AJAX: JQuery, Google Web Toolkit or RichFaces. There, I praise RichFaces for being built on top of JSF (Java Server Faces):

"If you haven't work with JSF, I really suggest you do as it is one of the most powerful, easy to learn Web technologies, very similar to swing with heavy use of components and events. It also has a really good support for managing the state in the server using plain Java objects (POJO’s)"

What has happened since? Well, after working with Wicket and Vaadin (which I still use in one of my open source projects), I have drifted away from component based web development.

The reason is simple. It doesn't have to do with performance, number of components or the size of the community. It's just that component based web frameworks couldn't keep the promise of isolating the programmer from client side development and the request/response nature of HTTP.

A leaky abstraction

Yes, I know. Every abstraction, to some degree, is leaky. However, the problem with this specific abstraction is that it leaks way too fast. You end up investing large amounts of time understanding how the framework works underneath and how to tweak it to achieve what you want. It's a real pain in the ass. The productivity promised in their "build an app in 10 minutes or less" videos is dead by the eleventh minute or so.

The other problem is that the technology underneath is moving too fast. HTML5 introduced a lot of new features such as web sockets, canvas and new data storage options. Web development is also moving towards stateless REST services and heavy clients. The future is not clear yet, though.

Back to basics

Lately, I have been working with Ruby on Rails and Express.js (Node.js). I think that their success lies in the fact that they embraced the request/response nature of HTTP. However, I don't buy the "Ruby made programming fun again" bullshit - a topic for another post -. I still like Java.

I also tried Play! Framework for a few weeks. It's an interesting option but I don't think that the solution is to copy exactly how Ruby on Rails works in Java/Scala. It's not really about productivity, it's about maintainability.

So, I'm actually using a super simple Java request/response based framework I made, plus HTML5, CSS3, JavaScript and jQuery.


Sunday, September 26, 2010

Extension points (plug-in) design

In a plug-in architecture, extension points allow other users to extend the functionality of your application, usually by implementing interfaces or extending abstract classes (known as the Service Provider Interface or SPI). Designing a good SPI is as important as designing a good API; it should be easy to learn and use, powerful enough to support multiple scenarios and should evolve without breaking existing implementations.

Starting with a simple interface is easy; the problem is keeping it that way as new requirements arrive. For example, take a look at the following MessageSender interface, which allows other users to implement different ways of sending messages (e.g. SMS, Email, Twitter, etc.): 

public interface MessageSender {

  void send(Message message) throws Exception;

}

Nice interface; it’s simple and easy to implement. However, suppose that implementations can have an optional name, an optional description, and methods to set up and destroy resources. After adding the required methods to the MessageSender interface, it ends up like this:

public interface MessageSender {

  // clean up resources
  void destroy() throws Exception;

  // return null or "" for an empty description
  String getDescription();

  // return null or "" for an empty name
  String getName();

  void send(Message message) throws Exception;

  // set up resources
  void setup() throws Exception;

}

Besides breaking existing implementations, these changes make the interface much harder to implement. So, how can we keep the MessageSender interface as simple as it was before while supporting the new requirements? Let’s take a look at two different approaches that will keep our SPI simple and extensible: creating optional interfaces and using annotations.

Optional interfaces

Instead of having all those methods in the MessageSender interface, we are going to split them up in three interfaces: MessageSender, Nameable and Configurable.

public interface MessageSender {

  void send(Message message) throws Exception;

}
public interface Nameable {

  String getName();
  String getDescription();

}
public interface Configurable {

  void setup() throws Exception;
  void destroy() throws Exception;

}

Now we’ve cleaned up the MessageSender interface. Users can implement Nameable and/or Configurable if they need to (they are optional). You’ll have to check at runtime whether the optional interfaces are implemented and call the corresponding methods where applicable. For example, the following helper method calls setup if the MessageSender implementation also implements Configurable:

...
  public static void setup(MessageSender ms) throws Exception {
  
    // check if ms implements Configurable
    if (ms instanceof Configurable) {

      // cast to Configurable and call setup method
      Configurable configurable = (Configurable) ms;
      configurable.setup();
    }
  }
...
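Putting it together, here is a self-contained sketch of the optional-interfaces approach. The interfaces are repeated (and a Message stub added) so the example compiles on its own; SmsSender is a hypothetical implementation:

```java
public class OptionalInterfacesDemo {

    static class Message {}

    interface MessageSender {
        void send(Message message) throws Exception;
    }

    interface Configurable {
        void setup() throws Exception;
        void destroy() throws Exception;
    }

    // an implementation that opts in to the Configurable contract
    static class SmsSender implements MessageSender, Configurable {
        boolean connected;
        public void setup()   { connected = true; }
        public void destroy() { connected = false; }
        public void send(Message message) {
            if (!connected) throw new IllegalStateException("not connected");
        }
    }

    // helper: only calls setup if the sender implements Configurable
    public static void setup(MessageSender ms) throws Exception {
        if (ms instanceof Configurable) {
            ((Configurable) ms).setup();
        }
    }

    public static void main(String[] args) throws Exception {
        SmsSender sender = new SmsSender();
        setup(sender);              // called because SmsSender is Configurable
        sender.send(new Message()); // succeeds: already connected
        System.out.println("connected=" + sender.connected);
    }
}
```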

Annotations

Another option, besides splitting into multiple interfaces, is to create annotations; for our example, we will need four: @Name, @Description, @SetUp and @Destroy. Implementations place these annotations as needed. For example, a MessageSender implementation (e.g. SmsMessageSender) named “SMS Message Sender” that needs to be destroyed at the end will look like this:

@Name("SMS Message Sender")
public class SmsMessageSender implements MessageSender {

  public void send(Message message) throws Exception {
    // send the message (connect if not connected)
  }

  @Destroy
  public void destroy() throws Exception {
    // disconnect from the SMSC
  }
}

As you can see, we have placed the @Name annotation above the class declaration and the @Destroy annotation on the method that will release the resources. Again, we will need some helper methods to check whether the implementation has the annotations and to call the corresponding methods. For example, the following code checks whether a method annotated with @Destroy exists and calls it accordingly:

...
  public static void destroy(MessageSender ms) throws Exception {
		
    Method destroyMethod = locateDestroyMethod(ms);
    if (destroyMethod != null) {
      destroyMethod.invoke(ms);
    }
  }
	
  private static Method locateDestroyMethod(MessageSender ms) {
    Method[] methods = ms.getClass().getMethods();
    for (Method method : methods) {
      // check if the @Destroy annotation is present
      if (method.isAnnotationPresent(Destroy.class)) {
        return method;
      }
    }
		
    // no method has the @Destroy annotation
    return null;
  }
...
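For completeness, the annotations themselves are plain runtime-retained Java annotations. A minimal sketch of @Name and @Destroy might look like this (@Description and @SetUp follow the same pattern), with a tiny demo class to show them being read back via reflection:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME) // must survive to runtime for reflection
@Target(ElementType.TYPE)           // placed on the implementation class
@interface Name { String value(); }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)         // placed on the cleanup method
@interface Destroy {}

public class AnnotationDemo {

    @Name("SMS Message Sender")
    static class SmsMessageSender {
        @Destroy
        void disconnect() { /* release resources */ }
    }

    public static void main(String[] args) throws Exception {
        // read the class-level annotation
        Name name = SmsMessageSender.class.getAnnotation(Name.class);
        System.out.println(name.value());

        // check the method-level annotation
        Method m = SmsMessageSender.class.getDeclaredMethod("disconnect");
        System.out.println(m.isAnnotationPresent(Destroy.class));
    }
}
```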

Annotations give us more flexibility than optional interfaces, as users only have to add the annotations they need. Either way, both options will make your interfaces simpler, so it’s up to you.

Conclusion

Regardless of the approach you choose, keeping the interfaces (extension points) as clean as possible will allow you to:

  • Lower the learning curve of the interfaces. Users can now focus on the core methods that need to be implemented in order to add new functionality.
  • Make your documentation simpler, especially for the 101 examples. Then, you can create more advanced examples that include more complex scenarios.
  • Evolve the interfaces without breaking existing implementations.

Monday, September 20, 2010

Mokai: Architecture of a Messaging Gateway

Software architecture is all about simplicity. In this post, I’ll show you the process of building the architecture of Mokai, an open source Java Messaging Gateway capable of receiving and routing messages through a variety of mechanisms and protocols including HTTP, Mail, SMPP and JMS (Java Message Service), among others.

The black box

Applications send and receive messages to and from external services such as an SMSC, an SMTP server, a JMS queue, etc. In the picture, we can identify the first three challenges of the gateway:

  • Receiving messages from applications using any protocol.
  • Sending messages to external services using any protocol.
  • Routing messages internally.

Normalizing messages

Let’s start with the last point. How do we route messages internally? Well, with so many protocols for receiving and sending messages, we’ll need a Message class we can manipulate inside Mokai. It will contain some basic information, a body and a map of properties. This way, we can encapsulate any type of information we want inside the message (in the body or properties).

When Mokai receives a message, it will have to be normalized (i.e. encapsulated in a Message object) so it can be routed through the gateway. Once we are ready to send the message to an external service, it will have to be de-normalized (i.e. converted to a protocol that the external service can understand). Easy, right?
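A minimal sketch of such a normalized Message class might look like the following (field and method names are illustrative; see Mokai's source for the real class):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the normalized message: whatever a receiver
// accepts (SMS, mail, JMS, ...) gets wrapped in one of these.
public class Message {

    private Object body;
    private final Map<String, Object> properties = new HashMap<>();

    public Object getBody() { return body; }
    public void setBody(Object body) { this.body = body; }

    public Object getProperty(String name) { return properties.get(name); }
    public void setProperty(String name, Object value) { properties.put(name, value); }

    public static void main(String[] args) {
        Message m = new Message();
        m.setBody("hello");
        m.setProperty("to", "573001112233");
        System.out.println(m.getBody() + " -> " + m.getProperty("to"));
    }
}
```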

Receivers and Processors

The other two challenges we have described above involve extensibility. Anyone should be able to create new protocols for receiving messages or routing them outside the gateway. When creating extension points for your application, make them as simple as you can. In this case, we will introduce two interfaces: Receiver and Processor. Users will have to implement these interfaces to introduce new protocols for receiving or sending messages.

 

The Processor interface is really straightforward; nothing strange there. It exposes just two methods: one to check whether the processor supports a Message and another to process it (de-normalizing it to the desired protocol).

The Receiver interface is even simpler; it exposes no methods, but it uses an instance of MessageProducer to route received messages into Mokai. The MessageProducer is injected into the Receiver automatically at runtime.

Routing messages

When Mokai receives a message from an application, it will have to decide which processor will handle it. To achieve this, each processor will have a collection of acceptors attached. Acceptors are an extension point of the application, so we introduce the Acceptor interface.

The Acceptor interface exposes a single method that returns true if the acceptor accepts the message and false if it rejects it. When a message is received, a router queries each group of acceptors, in priority order, to select the processor that will process the message.
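The routing idea can be sketched as follows. The names here are hypothetical (the real interfaces live in Mokai's source); the point is that the router walks the processors in priority order and picks the first one whose acceptors accept the message:

```java
import java.util.ArrayList;
import java.util.List;

public class RoutingSketch {

    static class Message {}

    // extension point: decides whether a processor wants a message
    interface Acceptor {
        boolean accepts(Message message);
    }

    // a processor plus its attached acceptors
    static class ProcessorService {
        final String name;
        final List<Acceptor> acceptors = new ArrayList<>();
        ProcessorService(String name) { this.name = name; }

        boolean accepts(Message message) {
            for (Acceptor a : acceptors) {
                if (a.accepts(message)) return true;
            }
            return false;
        }
    }

    // the router walks the processor services (assumed already sorted
    // by priority) and returns the first that accepts the message
    static ProcessorService route(List<ProcessorService> services, Message m) {
        for (ProcessorService ps : services) {
            if (ps.accepts(m)) return ps;
        }
        return null; // no processor accepted the message
    }

    public static void main(String[] args) {
        ProcessorService sms = new ProcessorService("sms");
        sms.acceptors.add(msg -> true); // toy acceptor: accepts everything

        List<ProcessorService> services = new ArrayList<>();
        services.add(sms);

        System.out.println(route(services, new Message()).name);
    }
}
```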

So far, our architecture looks like this:

We have introduced a new element in the diagram: a router; its task is to query all acceptors and processors to determine which one will handle the message.

Executing actions on messages

We might also want to execute actions on a message, such as validating it, transforming it or re-routing it to a different location. For these cases, we introduce a new extension point: the Action interface.

The Action interface exposes only one method to execute the action on the message. There are three places where we can execute actions on a message:

  • After a message is received (post-receiving actions).
  • Before the message is processed (pre-processing actions).
  • After the message is processed (post-processing actions).
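As a hedged sketch (names are illustrative), the one-method Action extension point and a toy validation action might look like this:

```java
public class ActionSketch {

    static class Message {
        String body;
        Message(String body) { this.body = body; }
    }

    // extension point: executed on a message at one of the three
    // points listed above (post-receiving, pre- or post-processing)
    interface Action {
        void execute(Message message) throws Exception;
    }

    // toy post-receiving action that rejects empty messages
    static class ValidateNotEmpty implements Action {
        public void execute(Message message) throws Exception {
            if (message.body == null || message.body.isEmpty()) {
                throw new IllegalArgumentException("empty message");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        new ValidateNotEmpty().execute(new Message("hello"));
        System.out.println("valid");
    }
}
```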

Now, with the introduction of actions, the complete architecture of Mokai looks like this:

We have encapsulated the receiver in a Receiver Service and the processor in a Processor Service. A Processor Service also has a collection of post-receiving actions not shown in the diagram (processors can act as receivers too). We have also introduced two queues to store the messages until they are processed.

The class diagram

Now that we’ve seen the architecture of Mokai, let’s see the final class diagram that models the architecture.

In this diagram we can see all the elements described in the architecture and their relationships.

Wrapping up

As you can see, Mokai’s architecture is simple, elegant and powerful. It provides all the elements we need to extend the platform to our will, keeping extension points simple and manageable.

In the first release of Mokai, there are also additional services such as message persistence, a plugin mechanism, an administration console and a configuration module, among others. You can check the project homepage for more information.

Friday, April 9, 2010

4 areas of possible confusion in JEE 6

The JEE specification is released and maintained through the Java Community Process, which defines the different APIs that will be included in each release and how they relate to each other. Given the technical and political factors involved in the process, besides the time constraints, it is no surprise that some areas of the specification may be confusing to new developers, especially with the rising number of annotations introduced by each API.

In this post, I will try to explain four areas where developers might find themselves confused when working with JEE 6.

1. Managed Beans and EL access

The following two annotations from the JEE 6 API are both used to access a bean from the Expression Language (i.e. using the #{} notation):

  • @javax.faces.bean.ManagedBean – Defined in the JSF 2.0 Managed Bean Annotations specification (JSR-314), it provides an alternative to the declaration of managed beans in the faces-config.xml descriptor file.
  • @javax.inject.Named – Defined in the Dependency Injection (JSR-330) specification, it is one of the built-in qualifier types of CDI (JSR-299), used to give a bean a name and make it accessible through EL.

So, which one to use? Always use the @Named annotation, unless you are working in a JSF 2.0 environment without CDI (a very unlikely scenario). There is just no reason to use JSF Managed Beans if CDI is present. You can check this article for more information.

Furthermore, there is another @ManagedBean annotation you might also get confused with:

  • @javax.annotation.ManagedBean – Defined by the Commons Annotations specification (JSR-250), it is used to declare a managed bean as specified in the Managed Beans specification (JSR-316).

Most likely, you will never need to use this annotation. Let’s see why.

First, don’t confuse Managed Beans with JSF Managed Beans; they are different things. The Managed Beans specification, a subset of JSR-316, is an effort to provide services like dependency injection, lifecycle callbacks and interceptors to POJOs (Plain Old Java Objects), which is really cool! So, why not use it?

Because CDI already treats all POJOs as Managed Beans! There is no need to explicitly annotate a POJO with @javax.annotation.ManagedBean. Nothing stops you from doing it, though.

2. Duplicated @…Scoped annotations

The following annotations are duplicated in the JEE 6 API:

  • @javax.faces.bean.RequestScoped and @javax.enterprise.context.RequestScoped.
  • @javax.faces.bean.SessionScoped and @javax.enterprise.context.SessionScoped.
  • @javax.faces.bean.ApplicationScoped and @javax.enterprise.context.ApplicationScoped.

The @javax.faces.bean… annotations are defined in the JSF Managed Beans specification (JSR-314) and the @javax.enterprise.context… annotations are defined in the CDI specification (JSR-299). 

So, which ones to use? Always use the CDI annotations, unless you are working in a JSF 2.0 environment without CDI (a very unlikely scenario). As discussed above, there is no reason to use JSF Managed Beans if CDI is present.

One scope that is missing in CDI is JSF’s @ViewScoped. However, the problem can be solved with a portable extension, which will be included in the Seam 3 Faces Module.

3. Defining a Singleton

There are four annotations that provide very similar functionality:

  • @javax.ejb.Singleton – A new type of EJB from the JSR-318 specification, it is used to maintain a single shared instance; it’s thread safe and transactional.
  • @javax.inject.Singleton – From the Dependency Injection specification (JSR-330), marks a type the injector will only instantiate once.
  • @javax.enterprise.inject.ApplicationScoped – One of the built-in scopes from the Contexts and Dependency Injection specification (JSR-299), specifies that a bean is application scoped.
  • @javax.faces.bean.ApplicationScoped – Defined in the JSF 2.0 (JSR-314) Managed Beans specification, specifies that a JSF managed bean is application scoped.

They all guarantee a single instance of the class. But, which one to use?

We have already discussed the duplicated @ApplicationScoped annotations above, so we will just dismiss @javax.faces.bean.ApplicationScoped to move on. That leaves us with just 3 options.

@javax.ejb.Singleton is the only one that will give you out-of-the-box “enterprise” services such as concurrency and transaction management. So, if you need these features, this is the annotation to use. You can optionally add the @javax.enterprise.inject.ApplicationScoped annotation but you will not feel the difference, although, CDI will treat it differently. You can check this forum thread for more information.

Note: even though it is possible to use @javax.enterprise.inject.ApplicationScoped with a Stateful Session Bean, I can’t find any good reason to do so, given that EJB 3.1 introduced Singleton Beans.

If you want to use POJOs (called Managed Beans in the JEE world) instead of EJBs, you can also annotate them with @javax.enterprise.inject.ApplicationScoped to guarantee that the class is instantiated only once. Be aware that you will lose all the services provided by EJBs, such as concurrency and transaction management.

I would not suggest using @javax.inject.Singleton unless you are working in a J2SE environment without CDI support. You can check the Weld documentation and this forum thread for more information about how this annotation is handled in CDI.

Note: none of the annotations discussed above will guarantee a shared single instance in a clustered environment. Usually, you will have one singleton instance per JVM.

4. @Inject or @EJB

They are both used for injection. @Inject is from the Dependency Injection specification (JSR-330) and @EJB from the EJB specification (JSR-318).

So, which one to use?

In most cases you will use @Inject over @EJB, unless you are working in an environment with EJBs but no CDI (very unlikely since JEE 6), or you need to define an EJB resource. You can check this forum thread for more information.

Conclusion

  1. If you need to access a bean through EL (i.e. from a JSF page), annotate the bean with @Named.
  2. Usually, you will never need any of the @ManagedBean annotations.
  3. Always use the @…Scoped annotations from the javax.enterprise.context package (CDI).
  4. To define a singleton, use the @ApplicationScoped annotation. If you need the EJB services such as concurrency and transaction, add the @javax.ejb.Singleton annotation.
  5. Always use @Inject instead of @EJB, unless you really have a motive not to.

Sunday, March 28, 2010

What’s so wrong with EJBs?

It’s not that they are heavy. It’s not that they are difficult to write. So, what’s wrong with EJBs?

EJBs can be Stateless, Stateful or Singleton. However, when you are working in a web environment, for example, these lifecycles will not suffice: request, session and application lifecycles would be much better suited in that case. Another example is a business process application, where you could have a task or a process lifecycle; again, EJBs will not match those lifecycles. This is what I call the Lifecycle Mismatch.

Through the years, EJBs have become more POJO-like, through the use of annotations and by dropping requirements such as XML descriptors and Java interfaces. However, it doesn’t matter how simple they get: until the lifecycle restriction is removed, you will end up creating additional layers in your application and different component models to adapt to your required lifecycles.

Note: the lifecycles I’m referring to are also known as contexts or scopes in other frameworks/specifications.

For example, take a look at JSF. It introduced JSF Managed Beans to handle this problem. JSF Managed Beans can be bound to the request, session or application contexts. From there, you are on your own as to how you connect them to the EJBs’ different lifecycles.

CDI (Contexts and Dependency Injection) is a specification, part of the JEE stack, that tries to fill that gap. It provides dependency injection and lifecycle management to POJOs and EJBs. But providing lifecycle management to EJBs doesn’t feel right, because they already provide their own. CDI does its best to accommodate this mismatch.

On the other hand, CDI could provide POJOs with all the services EJBs currently have, through extensions. So, do we really need EJBs? Well, not really! You can check my post EJB's - Time to let them go? for more information.

Conclusion

EJBs have become lightweight and easy to write. But the lifecycle mismatch is still there. POJOs, on the other hand, don’t have this restriction and, with CDI, they could have all the services EJBs currently provide.

Thursday, March 4, 2010

Understanding and Comparing Dependency Injection Frameworks

If you have used a modern Dependency Injection (DI) framework like JBoss Seam, Google Guice or CDI (Contexts and Dependency Injection), you might already be familiar with the following line of code:

@Inject MyBean myBean;

But, to really understand how a DI framework works and what makes one different from the other, we are going to build one from scratch. Let’s call it OurDI. Then, we will compare it with some of the most popular DI frameworks out there.

So, let’s start by writing a Quick Start Guide for OurDI that shows how it works.

Quick Start Guide

As any other modern DI framework, OurDI allows you to inject dependencies using annotations as in the following example:

public class UsesInjection {
  @Inject MyBean myBean;
  ... 
}

In order to configure OurDI, all you need to do is to populate a class named Injector with all the objects that can be injected at runtime; each object must be identified by a unique name. In the following example, we are going to configure the Injector with two classes MyBean and AnotherBean, bound to the names “myBean” and “anotherBean” respectively:

public class Main {
  public static void main(String[] args) {
    Injector injector = Injector.instance();
    injector.bind("myBean", new MyBean());
    injector.bind("anotherBean", new AnotherBean());
  }
}

So, when a method of the UsesInjection class (defined above) is called, a search will be done for the name of the attribute “myBean” on the Injector instance, and the value associated with that name will be injected.

Building the framework

Well, that was a quick Quick Start Guide! Now let's see how OurDI works underneath.

OurDI is composed of two classes and an annotation:

  • A singleton class named Injector, that will store the bound objects.
  • A class named InjectorInterceptor, which will use AOP (Aspect Oriented Programming) to inject the dependencies at runtime.
  • The @Inject annotation.

Let’s take a look at the Injector class:

public class Injector {

  private Map<String,Object> bindings = new HashMap<String,Object>();

  // static instance, private constructor and static instance() method 
 
  public void bind(String name, Object value) {
    bindings.put(name, value);
  }
 
  public Object get(String name) {
    return bindings.get(name);
  }
}

As you can see, Injector is a singleton class backed by a Map<String,Object> that binds each object to a unique name. Now, let’s see how injection is done when an @Inject annotation is found.

We can use any AOP library to intercept method calls and look for all the dependencies that need to be injected on the target object. In this case, we are going to use AspectJ:

@Aspect
public class InjectorInterceptor {

  @Around("call(* *.*(..)) && !this(InjectorInterceptor)")
  public Object aroundInvoke(ProceedingJoinPoint joinPoint) throws Throwable {
  
    // this should return the target class
    Class clazz = joinPoint.getTarget().getClass();
  
    // inject fields annotated with @Inject
    for (Field field : clazz.getDeclaredFields()) {
      if (field.isAnnotationPresent(Inject.class)) {
        doInjection(joinPoint.getTarget(), field);
      }
    }
  
    return joinPoint.proceed();
  }
 
  private void doInjection(Object target, Field field) throws IllegalAccessException {
    String name = field.getName();
    field.setAccessible(true);
    field.set(target, Injector.instance().get(name));
  }
}

Before each method call, the aroundInvoke method will be called. It will look at all the fields of the target object to see if it finds any @Inject annotation. For each of the annotated fields, it uses the name of the field to find the corresponding object in the Injector class.

Finally, here is our @Inject annotation:

@Target(value={FIELD})
@Retention(value=RUNTIME)
@Documented
public @interface Inject {}

That’s all! You can check the full source here. Now, let’s check some aspects of our solution and compare them with other popular frameworks out there.

Configuration

In OurDI, we have to manually populate the Injector instance with the objects that are going to be injected. This is similar to Google Guice. However, in Guice, there is no need to use a name and, instead of binding objects, we bind classes:

Injector injector = Guice.createInjector(new Module() {
  public void configure(Binder binder) {
    binder.bind(Notifier.class).to(SendSMS.class);
    binder.bind(Database.class).to(MySqlDatabase.class);
  }
});

JBoss Seam and CDI take a different approach. Instead of binding each class manually, they scan all your classes on bootstrap to populate the “injector”. In JBoss Seam, you will need to register the Seam Servlet Listener in your web.xml; for CDI, no configuration is required (besides the beans.xml descriptor), as the application server will do the scanning automatically (one of the benefits of being part of the JEE stack).

Scanning all classes and choosing the ones suitable for injection can take a while, so Google Guice (and OurDI) will start up faster. However, with JBoss Seam or CDI, you won’t have to worry about binding each class/object manually.

Type safety

As you might already have noticed, OurDI will break at runtime with a ClassCastException if the injected object is not of the expected type. The same happens with JBoss Seam, which also uses a name-based approach (components are named using the @Name annotation).

On the other hand, Google Guice and CDI are type safe, meaning that the injected objects are not identified by a name, but by their type.
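A toy illustration of the difference, using a plain map of bindings like OurDI's (MyBean is a hypothetical class):

```java
import java.util.HashMap;
import java.util.Map;

public class TypeSafetyDemo {

    static class MyBean {}

    public static void main(String[] args) {
        Map<String, Object> bindings = new HashMap<>();

        // someone binds the wrong type under the name "myBean"
        bindings.put("myBean", "oops, a String");

        try {
            // name-based injection: the mistake only surfaces at runtime
            MyBean bean = (MyBean) bindings.get("myBean");
            System.out.println("injected " + bean);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException at runtime");
        }
    }
}
```

With a type-based lookup (Guice's injector.getInstance(MyBean.class), or CDI's typed injection points), this kind of mismatch is caught at compile time or at bootstrap instead of at the injection site.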

Overriding/changing implementations

In OurDI, to override/change the implementation of an injected class, all we need to do is change the object bound to the Injector class. This is very similar to Google Guice as shown before.

For JBoss Seam and CDI, we need to provide more information about the classes we want to use. I won’t dive into the details but you can check how this works in the Weld documentation for CDI and in this post for JBoss Seam.

Constructor and method injection

Besides field injection, some DI frameworks provide constructor and method injection to initialize your objects. OurDI doesn’t support either of these, and neither does JBoss Seam. Both CDI and Google Guice support them.

Injection of external resources

Sometimes, you need to inject things you don’t have control of. For example, DataSources, 3rd party libraries, JNDI resources, etc. OurDI doesn’t support this. JBoss Seam supports it with factories and manager components, Google Guice supports it with provider methods and CDI supports it with producer methods and fields.

Scopes

Scopes, also known as Contexts, are a fundamental part of development. They allow you to define different lifecycles for each object (i.e. request, session, or application). OurDI doesn’t support scopes. However, Google Guice, JBoss Seam and CDI, all support scopes in one way or another. You can read more about scopes and their importance on this post.

You can also check the documentation of scopes for each framework/specification: Google Guice, JBoss Seam and CDI.

Static vs. Dynamic Injection

In OurDI, every time a method is called, it will scan the object to find dependencies that need to be injected. This is called Dynamic Injection and is how JBoss Seam works. The advantage of dynamic injection is that if a dependent object is changed, the new object will be injected in the next method call, so, you’ll always end up with the “correct” instance.

Google Guice and CDI use Static Injection, which means that injection occurs only once, after the object is created. The problem here, as stated above, is that if a dependent object is changed, you will still hold a reference to the old object. CDI solves this problem by using proxies that always point to the correct instance. So, even though it uses static injection (the field is injected only once), it behaves like dynamic injection.
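The proxy trick can be illustrated with a plain java.lang.reflect.Proxy (a deliberately simplified sketch of what CDI's client proxies do, not CDI's actual implementation):

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {

    interface Greeter { String greet(); }

    public static void main(String[] args) {
        // the "current" instance, which may be swapped at any time
        final Greeter[] current = { () -> "hello" };

        // the injected reference is a proxy that looks up the current
        // instance on every call, instead of holding it directly
        Greeter proxy = (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            (p, method, methodArgs) -> method.invoke(current[0], methodArgs));

        System.out.println(proxy.greet()); // delegates to the first instance
        current[0] = () -> "hola";         // the underlying instance changes
        System.out.println(proxy.greet()); // the proxy follows the change
    }
}
```

So the field is injected once (it always holds the same proxy), but every call is routed to whatever instance is current, which is exactly the "static injection that behaves dynamically" effect described above.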

Lazy Loading of objects

In OurDI, you need to instantiate all the classes that will be injected at startup. This is definitely not a good idea, as you are loading things into memory that you are not using yet. Google Guice, JBoss Seam and CDI all use a lazy loading approach where objects are loaded only when needed, which is a good thing.

Aspect Oriented Programming

Most DI frameworks provide some type of method interceptor mechanism that simplifies AOP development, usually with annotations. This is another feature that OurDI lacks! However, Google Guice, JBoss Seam and CDI, all support this feature with a very similar approach.

Integration

Integration with other environments is really simple with OurDI and Google Guice. You just need to configure the Injector on startup and that’s it. JBoss Seam is a complete development platform that integrates multiple technologies, and it would be almost impossible to integrate it into an environment other than JEE. CDI is part of the JEE 6 stack; however, implementations like Weld can run in different environments with the appropriate hookups. 

Conclusion

Ok, let’s face it. Our own DI framework sucks:

  • It doesn’t support constructor or method injection (or provide an alternate solution).
  • It doesn’t support injection of external resources.
  • It needs all classes to be instantiated on startup.
  • It doesn’t support scopes. 
  • It doesn’t provide a method interceptor mechanism.

However, I hope OurDI was useful enough to compare different DI frameworks. We only talked about Google Guice, JBoss Seam and CDI, but you can do the exercise with any other DI framework out there.