Monday, 29 January 2007

Spring in Action

Inversion of Control
Inversion of Control (IoC) is the reversal of responsibility with regard to how an object obtains references to other objects. Normally, each object is responsible for obtaining its own references to its dependencies. With IoC, objects are given their dependencies at creation time by an external entity which handles all the objects in the system.

Dependency Injection

Dependency Injection is simply a more apt name for IoC, since dependencies are injected into objects. There are 3 types of IoC (the last two are sketched below):
1) Interface Injection - Dependencies are supplied through special interfaces that the object implements.
2) Setter Injection - Dependencies and properties are configured via setter methods.
3) Constructor Injection - Dependencies and properties are configured via the constructor.
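
As a rough illustration, here is what setter and constructor injection might look like (the MessageSource/GreetingService names are made up for this sketch, not taken from Spring):

// MessageSource.java - the dependency, hidden behind an interface
public interface MessageSource {
    String getMessage();
}

// GreetingService.java - the dependent object; it never creates its own MessageSource
public class GreetingService {
    private MessageSource source;

    // Constructor injection: the dependency is handed over at creation time
    public GreetingService(MessageSource source) {
        this.source = source;
    }

    // Setter injection: the container calls this after instantiation
    public void setSource(MessageSource source) {
        this.source = source;
    }

    public void greet() {
        System.out.println(source.getMessage());
    }
}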

IoC allows Java objects to be loosely coupled, interacting through interfaces. It lets the programmer set up and configure the objects as desired while leaving little trace of that configuration in the code itself.

Application design for Spring is based on interfaces. Overall, the code stays as plain POJOs until the Spring setup: a class that uses all the coded objects, plus a Spring configuration file, usually XML.
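
A minimal sketch of that setup class, reusing the GreetingService above (the beans.xml file name and bean id are assumptions; the actual wiring lives in that XML file):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        // Load the bean definitions from the XML configuration file
        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
        // The container has already injected GreetingService's dependencies
        GreetingService service = (GreetingService) ctx.getBean("greetingService");
        service.greet();
    }
}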

Thursday, 18 January 2007

Native Hibernate vs. Hibernate JPA

Native Hibernate uses only the Hibernate Core for all its functions. The code for a class that will be saved to the database is displayed below:
package hello;

public class Message {
    private Long id;
    private String text;
    private Message nextMessage;
    // Constructors, getters, setters...
}
As can be seen, it is merely a Plain Old Java Object (POJO). The relational mapping that links the object to the database table is in an XML mapping document. The actual code that will create and save the object is below:
package hello;

import java.util.*;
import org.hibernate.*;
import persistence.*;

public class HelloWorld {
    public static void main(String[] args) {
        // First unit of work
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = session.beginTransaction();
        Message message = new Message("Hello World");
        Long msgId = (Long) session.save(message);
        tx.commit();
        session.close();
        // Shutting down the application
        HibernateUtil.shutdown();
    }
}
Session, Transaction and Query (not shown) objects are available through the org.hibernate import. They allow database tasks to be handled at a higher level than DAOs written with raw JDBC.
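
The HelloWorld class leans on a HibernateUtil helper from the persistence package; a minimal sketch of such a helper (assuming a hibernate.cfg.xml on the classpath) would be:

package persistence;

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUtil {
    private static final SessionFactory sessionFactory;

    static {
        try {
            // Reads hibernate.cfg.xml and builds the (expensive) SessionFactory once
            sessionFactory = new Configuration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }

    public static void shutdown() {
        // Releases connection pools and caches
        getSessionFactory().close();
    }
}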

Hibernate JPA is accessed through Hibernate EntityManager and Hibernate Annotations. Hibernate EntityManager is merely a wrapper around Hibernate Core that provides and supports JPA functionality. The resulting change in the code can be seen below:
package hello;

import javax.persistence.*;

@Entity
@Table(name = "MESSAGES")
public class Message {
    @Id @GeneratedValue
    @Column(name = "MESSAGE_ID")
    private Long id;

    @Column(name = "MESSAGE_TEXT")
    private String text;

    @ManyToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "NEXT_MESSAGE_ID")
    private Message nextMessage;

    // Constructors, getters, setters...
}
The XML document with all the relational data has been removed and replaced with inline annotations, which are provided by the javax.persistence import. The only difference between the Hibernate POJO and the JPA POJO is the annotations. The class still runs fine as plain Java; the annotations mark it as a persistent entity but do nothing until Hibernate processes them, and JPA can glean enough information from them to perform the ORM and persistence tasks. The HelloWorld code:
package hello;

import java.util.*;
import javax.persistence.*;

public class HelloWorld {
    public static void main(String[] args) {
        // Start EntityManagerFactory
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("helloworld");
        // First unit of work
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        Message message = new Message("Hello World");
        em.persist(message);
        tx.commit();
        em.close();
        // Shutting down the application
        emf.close();
    }
}
The Hibernate import is gone, replaced by javax.persistence. The EntityManagerFactory, EntityManager and EntityTransaction run the database tasks.
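
The Query side (not shown earlier) has a JPA counterpart too; a small sketch of a second unit of work that reads the messages back with JPQL, reusing the emf from main() above (the query itself is just an illustration):

// Second unit of work: read the messages back (the JPA 1.0 Query is untyped)
EntityManager em2 = emf.createEntityManager();
EntityTransaction tx2 = em2.getTransaction();
tx2.begin();
List messages = em2.createQuery("select m from Message m order by m.text asc")
                   .getResultList();
System.out.println(messages.size() + " message(s) found");
tx2.commit();
em2.close();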

Both APIs seem similar, and choosing one over the other is a matter of preference. Native Hibernate is the cleaner of the two, with the relational data kept in an XML document. Hibernate JPA is standardised as part of Java and can be ported to other JPA providers easily.

Other JPA implementations:
Open-source:
GlassFish
Apache OpenJPA

Commercial:
SAP NetWeaver
Oracle TopLink
BEA Kodo

Hibernate

Hibernate is an open-source project that takes on the role of the persistence layer, becoming the middleman between the business logic code and the database data. Its express purpose is to free developers from the tedious, repetitive coding of database tasks such as queries, insertions and deletions.

Take the previous DAOExercise as an example, where most of the code dealt with inserting, selecting and deleting data from the MySQL database. Hibernate would handle these mundane tasks and allow the developer to focus more on the business logic and rare SQL code exceptions.

Hibernate incorporates ORM (object/relational mapping), which maps objects to their proper tables in the database. Such logic would otherwise have to be hand-coded in DAOs using raw JDBC, leaving it up to the developer to track and maintain any changes in either the objects or the database tables. Using Hibernate simplifies matters and makes maintenance easier.

Wednesday, 17 January 2007

Object Pool Pattern

The Object Pool pattern dictates having an object (usually a singleton) maintain a pool of reusable objects that can be checked out and in by clients who will use them. Connections to databases are the perfect candidate for this pattern.

The ConnectionFactory (CF) in the DAOExercise can act as the connection-pool manager, with the DAO classes as its clients. A DAO class requests a connection from the CF for a query. The CF checks its pool: if there are no objects in the pool, it creates one and hands it to the DAO class; otherwise it pops an object off the pool and returns it. Once the DAO class is done with the query, it sends the connection back to the CF, which puts it into the pool. The pool may have a maximum number of objects; if all of them have been checked out, the CF cannot create another and must wait for one to be returned before it can honour the DAO class's request.
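
A bare-bones sketch of that check-out/check-in flow (class, field and method names are my own; error handling and the actual connection creation are left out):

import java.sql.Connection;
import java.util.LinkedList;

public class ConnectionPool {
    private static final int MAX_CONNECTIONS = 10;
    private final LinkedList<Connection> pool = new LinkedList<Connection>();
    private int checkedOut = 0;

    // Check a connection out: reuse one from the pool, create one if the pool is
    // empty and the maximum has not been reached, otherwise wait for a return
    public synchronized Connection checkOut() throws InterruptedException {
        while (pool.isEmpty() && checkedOut >= MAX_CONNECTIONS) {
            wait(); // every connection is in use
        }
        Connection conn = pool.isEmpty() ? createConnection() : pool.removeFirst();
        checkedOut++;
        return conn;
    }

    // Check a connection back in and wake up any waiting client
    public synchronized void checkIn(Connection conn) {
        pool.addLast(conn);
        checkedOut--;
        notifyAll();
    }

    private Connection createConnection() {
        // Database-specific creation code would go here (e.g. via DriverManager)
        return null; // placeholder for the sketch
    }
}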

The Object Pool pattern benefits designs where resources are expensive or slow to create, or limited in number, and must be shared among the clients using them. Real-world examples of the design are car rentals and timesharing.

Factory Method Pattern vs. Abstract Factory Pattern

The Factory Method pattern defines an interface for creating an object but lets subclasses decide which class to instantiate. For example, the UIBuilder class contains two method stubs; the subclasses (EnglishUIBuilder, MalayUIBuilder) have to implement these stubs in their own way.

The Abstract Factory pattern provides an interface for creating families of related or dependent objects without specifying their concrete classes. This pattern is often used in conjunction with the Factory Method pattern, so it can be seen as a factory of factory objects.

A static method on UIFactory returns a subclass of UIFactory, which in turn creates the appropriate UIBuilder object. The UIFactory subclass returned is determined by reading a config file, falling back to a default value.
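
A rough sketch of how those pieces could hang together (the concrete factory names, the ui.properties file and its language key are assumptions; everything is crammed into one listing for brevity):

import java.io.FileInputStream;
import java.util.Properties;

// The products: UIBuilder and its two method stubs, implemented per language
abstract class UIBuilder {
    abstract void buildMenu();
    abstract void buildButtons();
}
class EnglishUIBuilder extends UIBuilder {
    void buildMenu() { /* English menu */ }
    void buildButtons() { /* English buttons */ }
}
class MalayUIBuilder extends UIBuilder {
    void buildMenu() { /* Malay menu */ }
    void buildButtons() { /* Malay buttons */ }
}

// Factory Method: subclasses decide which UIBuilder to instantiate
abstract class UIFactory {
    abstract UIBuilder createBuilder();

    // Abstract Factory entry point: pick the concrete factory from a config file
    static UIFactory getFactory() {
        Properties props = new Properties();
        try {
            props.load(new FileInputStream("ui.properties"));
        } catch (Exception e) {
            // no config file - fall through to the default below
        }
        String language = props.getProperty("language", "english");
        if ("malay".equalsIgnoreCase(language)) {
            return new MalayUIFactory();
        }
        return new EnglishUIFactory();
    }
}
class EnglishUIFactory extends UIFactory {
    UIBuilder createBuilder() { return new EnglishUIBuilder(); }
}
class MalayUIFactory extends UIFactory {
    UIBuilder createBuilder() { return new MalayUIBuilder(); }
}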

Tuesday, 16 January 2007

MySQL Class

Prior to the addition of the ConnectionFactory, connection to the database was handled by the MySQL class. The class was to return a connection to any caller via a getter method.

The connection was made static in the class, so only one instance of it is ever created. All the methods were static as well, so the class never needs to be instantiated. With that, a static initializer was added to establish the connection once the class was loaded.
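
A minimal sketch of what that looked like (the URL, username and password are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MySQL {
    private static Connection conn;

    // Static initializer: runs once, when the class is loaded
    static {
        try {
            Class.forName("com.mysql.jdbc.Driver");
            conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/daoexercise", "user", "password");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static Connection getConnection() {
        return conn;
    }

    public static void closeConnection() throws SQLException {
        if (conn != null) {
            conn.close();
        }
    }
}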

For testing purposes, one connection is enough. In a live environment, the single connection that MySQL holds would be swamped with requests; a pool of connections would be better suited there. So at the time the choice of a single connection was appropriate, but it would not be scalable outside test conditions.

The static initializer was there to initialize and establish the connection. This happens when the class is loaded, in other words when a static method is first called - which would be the getter method. Initialization could take place inside the getter or another method, but since the connection is static and only needs to be established once, the static initializer was used.

When one has an open connection, one should provide a way of closing it; that was the closeConnection method. Ideally, once a class is done with the connection, it should close it. Going through DAOExercise, I discovered that once closed, a connection cannot be reopened, so thereafter the only use for closeConnection was after all the tests were done in the AllTests class.

In retrospect, using the static initializer was not a good idea. It would be better to stick it in the getter method:

public static Connection getConnection() throws SQLException {
    // Check for null before isClosed() to avoid a NullPointerException;
    // isClosed() itself can throw SQLException, hence the throws clause
    if (conn == null || conn.isClosed()) {
        establishConnection();
    }
    return conn;
}

This way, closeConnection can be used.

Addendum: a static reference is created only once for the class; all objects of the class share it. As MySQL is never instantiated, that point is moot for it. Not so for the DAOExercise objects: two EmployeeDAOImpl objects will share the static connection.

Monday, 15 January 2007

Abstract Factory Pattern for DAOExercise

The Abstract Factory pattern allows multiple factories that share a common theme to be accessed through one class. In DAOExercise, this class is the ConnectionFactory, which creates the appropriate factory for use. It currently has one concrete implementation, MySQLConnectionFactory, and can be further extended (OracleConnectionFactory, OCBCConnectionFactory, etc.).

This implementation allows the underlying database (MySQL) to be divorced from the actual DAOExercise classes (Employee, Address, Dependent), so any database change (e.g. a switch from MySQL to Oracle) can be coded and slotted into the ConnectionFactory while touching very little of the DAOExercise code.

Another way of looking at ConnectionFactory is as a factory of factories. The ConnectionFactory determines which factory is handed over to the client code via a string that the client code must set. The MySQLConnectionFactory then creates and returns a connection to the MySQL database. This outlook can be confusing when all the client code sees is the ConnectionFactory reference and not the actual factory object.
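
A condensed sketch of that arrangement (the selection strings and the getFactory method name are assumptions; the subclasses would normally live in their own files):

import java.sql.Connection;

// The abstract factory that the client code sees
abstract class ConnectionFactory {
    abstract Connection createConnection();

    // Hands back the concrete factory matching the string set by the client code
    static ConnectionFactory getFactory(String database) {
        if ("mysql".equalsIgnoreCase(database)) {
            return new MySQLConnectionFactory();
        }
        if ("oracle".equalsIgnoreCase(database)) {
            return new OracleConnectionFactory();
        }
        throw new IllegalArgumentException("Unknown database: " + database);
    }
}

class MySQLConnectionFactory extends ConnectionFactory {
    Connection createConnection() {
        // MySQL-specific driver loading and connection code goes here
        return null; // placeholder for the sketch
    }
}

class OracleConnectionFactory extends ConnectionFactory {
    Connection createConnection() {
        // Oracle-specific driver loading and connection code goes here
        return null; // placeholder for the sketch
    }
}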

Using this pattern for DAO creation (a DAOFactory generating EmployeeDAOFactory, AddressDAOFactory and DependentDAOFactory) is impossible with the current design, as the implementing classes are not related to each other. The pattern could get around this with the Adapter pattern, but that would still require a major rewrite of the design. At best, DAO creation is served by a single factory (DAOFactory) which generates all three classes. Also, unlike the ConnectionFactory objects, which need different initializing data, the DAOFactory objects are self-contained.

As the purpose of a factory is to generate objects for use, a single instance of it would suffice. Coding it to comply with the Singleton pattern would enforce this single instance. However, the pattern is not a requirement; having multiple factories would not be a problem, save for efficiency and design.

In MySQLConnectionFactory, there is a Properties variable used to read a file containing all the database-specific information. Aside from the connection data, the file stores all the SQL queries for that database, which are used in the DAO classes. The file alone, however, cannot be used to create the DAO objects.

Thursday, 11 January 2007

DAOExercise Architecture

Having implemented and written DAO code that accesses a MySQL database, there need to be changes if that code is to access an Oracle database. Though both databases use SQL, code that works for one may be broken or have unexpected results in the other, so the SQL code needs to be tested and rewritten as necessary. The driver class needs to be rewritten with the proper commands and authentication so as to get the right connection to the Oracle database.

Whenever the database changes, the Java code has to be rewritten to accommodate it. This is due to the hardcoding of the SQL and driver information. The only way to not rewrite all that is to throw it to a go-between which interacts with the database and leaves the Java code handling objects only.
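
A small sketch of what that go-between looks like for one of the DAOExercise classes (the method names are assumptions; only Employee and EmployeeDAOImpl appear in the actual exercise):

import java.util.List;

// Placeholder domain object; the real Employee class lives in DAOExercise
class Employee { }

// The business code talks only to this interface; the SQL, driver and
// connection details stay inside an implementation such as EmployeeDAOImpl
interface EmployeeDAO {
    void insert(Employee employee);
    Employee findById(int id);
    List<Employee> findAll();
    void delete(int id);
}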

Tuesday, 9 January 2007

SQL Injection

SQL Prepared Statements are apparently not subject to injection attacks. The precompiled code will view the wildcard parameters as data only. Attempts to subvert the code proved futile, with no change to the database. Proper arguments work and the code executes.
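
A small example of the sort of thing that was tried (table and column names are placeholders): the bound parameter is treated purely as data, so a classic ' OR '1'='1 payload is just matched as literal text.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class InjectionTest {
    public static void findEmployee(Connection conn, String name) throws SQLException {
        // The ? placeholder is precompiled; setString() binds the value as data only
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM employees WHERE name = ?");
        ps.setString(1, name); // even "x' OR '1'='1" is treated as one literal string
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
        }
        rs.close();
        ps.close();
    }
}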

Monday, 8 January 2007

JDBC Basics

Learning how to use SQL via the Eclipse Java IDE and MySQL. Had problems connecting to the MySQL databases until I remembered to start up the service (>_<).

The most crucial part was getting the connection through the DriverManager class and its settings. After that came the creation of tables and filling them with data. The tutorial at http://java.sun.com/docs/books/tutorial/jdbc/basics/tables.html was vague on that part and I had to look elsewhere to do it.

Used SELECT to print out the table values with the help of the ResultSet and Statement classes. Then looked into updating the data via Java methods instead of normal SQL commands.
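
A minimal version of that SELECT-and-print step (table and column names are placeholders, and the Connection is assumed to be open already):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TablePrinter {
    // Print every row of a (hypothetical) employees table
    public static void printEmployees(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT id, name FROM employees");
        while (rs.next()) {
            System.out.println(rs.getInt("id") + "\t" + rs.getString("name"));
        }
        rs.close();
        stmt.close();
    }
}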

Prepared Statements are Statements given an SQL command at creation time. With wildcard parameters in the command, one can reuse the statement repeatedly, changing the parameters at will. Also looked at the joining of two tables.

The last was transactions, how to commit several statements as an atomic action. The Savepoint methods allowed part of the transaction to survive a rollback.
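
A short sketch of that last part (the messages table is a placeholder): autocommit is switched off, a Savepoint marks a point to roll back to, and commit() makes whatever survived permanent.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;

public class TransactionDemo {
    public static void run(Connection conn) throws SQLException {
        conn.setAutoCommit(false); // group the statements into one transaction
        Statement stmt = conn.createStatement();
        try {
            stmt.executeUpdate("INSERT INTO messages (text) VALUES ('first')");
            Savepoint sp = conn.setSavepoint("afterFirst");
            stmt.executeUpdate("INSERT INTO messages (text) VALUES ('second')");
            // Undo only the second insert; the one made before the savepoint survives
            conn.rollback(sp);
            conn.commit(); // only 'first' is made permanent
        } finally {
            stmt.close();
            conn.setAutoCommit(true);
        }
    }
}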

One thing to keep in mind when building SQL command strings is the spacing.

SCJP Exam

4th January 2007, the day of the SCJP exam, scheduled for 10:00AM. Arrived at the testing centre and was told to leave all my things in a locker. That included the pens I brought, prompting the question - will I be provided pen and paper? Apparently not: they have marker pens and thin mousepad-sized writing-board sheets (1 pen, 2 sheets and a duster per tester). That was a let-down. Though it's understandable why they do so, I would be more comfortable with the old pen-and-paper standard. Using a marker pen to record one's answer is somewhat irritating.

The questions were much simpler than expected, after the torture of going through the twisted self-test questions and mock exams. Even knowing that practice questions are harder than the real ones, I was still startled. 2 hours and 55 minutes is enough time for an initial leisurely pass answering the 72 questions, skipping those with long, convoluted code sure to take some time to understand, a second pass to wrap up the unanswered questions, and a quick third pass through all the questions, minus the drag-and-drop ones. Time spent recording the answers for the drag-and-drop questions is time that could have been better spent elsewhere; reanswering them is definitely a pain, since the answers are cleared whenever you want a second look at the question.

Overall, I think the preparation done for the SCJP exam was sufficient, as I answered most of the questions with confidence. Only on one question did I really have some doubt about what would happen (oh, for a compiler at that moment). So though I entered the exam with trepidation, I ended it with a very confident outlook. Of course, getting 13 questions wrong knocked me down with a good dose of humility, though a pass at 81% is not bad.

Looking over the breakdown of the score, I did pretty well in Declarations, Initialization and Scoping, Collections/Generics and Fundamentals, and got perfect marks for Flow Control. The areas where I was lacking were API Contents, Concurrency and OO Concepts. I was surprised about Concurrency; admittedly it is a complex topic, but I did not struggle with any questions on it except one that looked like a deadlock situation. API Contents and OO Concepts were no surprise - the mocks that gave a breakdown listed them as problem areas. However, I elected to focus on Generics and Collections, feeling I had a shaky understanding of them, and it paid off.

What did I get from the entire affair? Aside from the SCJP certification, which I am not sure will be that useful, the month-plus of training established a solid grounding in the standard Java language. It exposed me to the new features of Java 1.4 and 1.5. I'm still not happy with Generics; thinking about it makes me feel like it is an abstract topic (in the literal, not Java, sense). I can use it for collection type-safety, though the thought of infesting classes and methods with wildcards, super and extends drives me off the deep end.

=^.^= At last I can bind the SCJP book with chains, weight it down with rocks and dump it in the deepest, murkiest river I can find, all the while dancing and cackling madly. =^.^=