Friday 7 September 2007

Spring 2.1 Annotated Dependency Injection

Annotation-Driven Dependency Injection in Spring 2.1: This page shows how annotations can be used to drive dependency injection entirely. The role of the application context XML file is minimised, with no bean definitions necessary; instead, the file sets up an annotation-aware context that scans the classes in a specified package.

The downside is that the POJOs are now infected with Spring configuration. For a large project with dozens of files, this would make changes difficult.
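For reference, the scanning setup described above boils down to a context file along these lines. This is a sketch: the element names follow the context namespace as it stabilised around the 2.5 release (the 2.1 milestones may differ), and the package name is a placeholder.

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- No bean definitions: scan this (placeholder) package for annotated classes instead -->
    <context:component-scan base-package="com.example.app"/>

</beans>
```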

Thursday 6 September 2007

Wicket 1.3

Going over the Migrating to Wicket 1.3 page and looking at the API changes which affect the FilmFestival project:

Using a Filter: Change the assignment of ServletContext in the WebApplication subclasses. Change the web.xml listing.

IModel change: The FilmFestival's DetachableBlogEntryModel needs to extend another class as per the migration mapping, since AbstractReadOnlyDetachableModel has been removed. Note the method name changes.

Interesting bit on CompoundPropertyModel. The project uses CompoundPropertyModels to automatically bind data elements to their page counterparts. As the approach used is different, no alteration should be required.

Validation: The API for validation has been moved from package wicket.markup.html.form.validation to package org.apache.wicket.validation.

Repeaters: This API has been moved from extensions to the core, requiring only a Ctrl-Shift-O to relink the classes affected in the import section.

DatePicker: This component has been removed. Alternatives listed need to be used.

Custom resource loading: Would this allow the html files to be separated from their java counterparts? The instructions here do not seem to use it.

One of the new features in Wicket 1.3 is Enclosure which, as shown here and here, is a way of toggling markup visibility for components.
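As a sketch of the idea (the component id "status" below is hypothetical): markup wrapped in a wicket:enclosure tag is only rendered when the referenced child component is visible, so the surrounding decoration toggles along with it.

```html
<!-- The whole table row disappears when the "status" component is set invisible -->
<wicket:enclosure child="status">
    <tr>
        <td>Status:</td>
        <td><span wicket:id="status">placeholder text</span></td>
    </tr>
</wicket:enclosure>
```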

Wednesday 29 August 2007

Installing JIRA, a bug & issue tracker

Installing the WAR version
Obtain and extract the WAR file:
wget http://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-enterprise-3.10.2.tar.gz
gunzip atlassian-jira-enterprise-3.10.2.tar.gz
gtar -xf atlassian-jira-enterprise-3.10.2.tar.gz

Tomcat is installed through Blastwave. The path is /opt/csw/share/tomcat5. Follow the Tomcat installation instructions. The transaction factory in the WAR's edit-webapp/WEB-INF/classes/entitymanager.xml is set to Tomcat and does not need to be changed. There are only two settings to be changed for use with PostgreSQL. Build the distribution and copy from dist-tomcat/ to the web server.

Copy and edit the jira.xml file:
cp {WAR}/dist-tomcat/tomcat5.5/jira.xml /opt/csw/share/tomcat5/conf/Catalina/localhost

The Resource element in jira.xml should now include these four attributes:
username="jira_user"
password="{password}"
driverClassName="org.postgresql.Driver"
url="jdbc:postgresql:jiradb"


Obtain the JDBC driver for Tomcat and move it to common/lib:
wget http://jdbc.postgresql.org/download/postgresql-8.1-410.jdbc3.jar

Extract the jars to Tomcat's common/lib/ directory:
wget http://www.atlassian.com/software/jira/docs/servers/jars/v1/jira-jars-tomcat5.zip

Start Tomcat and go through the setup wizard. The LDAP integration is not tightly coupled, so you must create JIRA users with the same names as the LDAP users for authentication to work. A bulk import of LDAP users can be done if desired.

Grinder 3, a Java load tester

The file download is available here. Once downloaded, it can be extracted to any location. It is run through the java command, as explained at this site and here as well.

To get it running as soon as possible, run the TCPProxy and set the browser's proxy to port 8001. The proxy will record the actions you take through the browser. Once complete, close the proxy and start the console. Create a grinder.properties from the example given and start the agent. Use the console to start the worker processes, and stop them when you have had enough.
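A minimal grinder.properties for that quick start might look like the sketch below. The property names come from the Grinder 3 documentation; the script name and the counts are placeholder values.

```properties
# Script recorded by the TCPProxy (placeholder name)
grinder.script = grinder.py

# One worker process, five simulated users, ten runs each
grinder.processes = 1
grinder.threads = 5
grinder.runs = 10

# Where the console is listening
grinder.consoleHost = localhost
```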

The results obtained are mostly timing statistics. One would have to create or edit a script to obtain more specific details. Testing against a local webapp on Tomcat revealed a Java heap memory exception in the Tomcat logs. Against the same webapp on a public server, there were no errors in the Grinder logs.

Tuesday 28 August 2007

Installing XWiki

Before installing the XWiki WAR version listed here, you'll need to install a servlet container and a relational database, in this case Tomcat and PostgreSQL, following the instructions given.

After configuring XWiki (the concluding step), you will need to import the default skin for full functionality. It can be downloaded from the same place as the WAR file.

The LDAP authentication is basically straightforward: replace the values with your own and you are set. LDAP users have to log in to XWiki before they have a presence there. XWiki groups have to be assigned manually, as the LDAP groups are not imported.

Friday 24 August 2007

Installing Confluence

Follow the install guide here. Going with the standalone version, ensure that the Java JDK is installed. Obtain the latest Confluence archive; note that 2.5.6 has a bug that prevents LDAP authentication from working properly. Trying to reuse an old database with LDAP authentication will also trigger this bug.

With Solaris, there is no need for the X11 libraries. The home directory is where Confluence data will be kept; Confluence itself runs from the archive. Go to the Confluence Setup Wizard Guide and create the admin user.

LDAP Authentication
Follow this. Note Step 3.2. Step 5 must be successful. For Step 6, refer to this and this. There is a problem linking LDAP users with their groups: currently both are shown but not connected. The current solution is to create a Confluence group and use that.

Tuesday 21 August 2007

LDAP Authentication with TWiki

Using LdapContrib for transparent authentication. Follow the installation instructions, first installing the required dependencies, either through the script or manually. This explains more about what LdapContrib does. The LdapNgPlugin and NewUserPlugin may also be desired.

Friday 17 August 2007

Installing TWiki

This post covers the installation of TWiki on a remote Solaris 11 server. It assumes the installer has root access. The main reference for this post is here - Installing TWiki 4.x on Solaris 10.

Create the following (arbitrary) directory structure in the filesystem:
/apps/twiki-root
/apps/twiki-root/bin
/apps/twiki-root/twiki - A symlink to the directory below
/apps/twiki-root/twiki-4.1.2
/apps/twiki-root/apachemodules
/apps/twiki-root/perlmodules


Download and unpack the latest TWiki version. Create the LocalLib.cfg file in /apps/twiki-root/twiki/bin. Modify it to set the twikiLibPath and the path for TWiki related Perl modules:
$twikiLibPath = "/apps/twiki-root/twiki/lib";

@localPerlLibPath = ('/apps/twiki-root/perlmodules', '/apps/twiki-root/perlmodules/i86pc-solaris-64int');


Building CPAN Perl modules on Solaris 10


Go over to CPAN and download the CGI::Session and Digest::SHA1 modules. Unpack them into temporary directories and use these commands:
$ /usr/perl5/bin/perlgcc Makefile.PL [LIB=/apps/twiki-root/perlmodules]
$ make
$ make test
$ make install


Blastwave for GNU grep, diff and rcs


Follow Blastwave installation instructions to enable pkg-get and install grep, diff and rcs. They will be available in /opt/csw/bin. Create symlinks for egrep(->ggrep), fgrep(->ggrep) and diff(->gdiff).

Apache


Create the /etc/apache2/httpd.conf from the example in that directory. Use the ApacheConfigGenerator to configure a twiki.conf file. Settings used for the generator:
Enter the full file path to your twiki root directory (mandatory):
/apps/twiki-root/twiki

Enter the IP address range or hostnames that will have access to configure - separate with spaces
localhost

Enter the list of user names that are allowed to view configure
Empty

Enable mod_perl
Unchecked

Choose your Login Manager:
None - No login

Prevent execution of attached files as PHP scripts if PHP is installed:
No PHP Installed

Block direct access to viewing attachments that end with .htm or .html
Unchecked

Block direct access to viewing attachments in Trash web

Unchecked


Append the file to httpd.conf or otherwise have Apache load it. Restart the Apache Web Server:
svcadm disable apache2
svcadm enable apache2

Check /var/svc/log/network-http:apache2.log to see whether the server is up or failed to start. Troubleshoot as required.

Browser configuration


Go to http://hostname/twiki/bin/configure to continue configuring TWiki. Add /opt/csw/bin to the path setting, and use it (or a symlink) for rcs as well. Complete the setup.

Done. More details forthcoming.

Friday 10 August 2007

LDAP Authentication with SugarCRM

Both SugarCRM and LDAP are installed. LDAP is to be used to authenticate users in SugarCRM.

Login to SugarCRM as admin. Click on the 'Admin' button at the top of the page and then 'System Settings' in the main page.

Set 'Enable LDAP'. The default LDAP server and port number are localhost and 389 respectively. Change as required. The base dn is the location where the search for users begin, in this case dc=nodomain.

For the bind attribute, disregarding the example text, it should be dn. More information here. The login attribute is the username attribute to be used; any attribute can be used (e.g. uid, sn, cn). An important thing to note: with 'Auto Create Users' set, a change in the login attribute will create new users with that attribute as the username. If 'Auto Create Users' is not set, authentication will fail when a user is present in LDAP but, because of the username, not in the SugarCRM database.

The authenticated user and password are the LDAP account to be used for searching. 'Auto Create Users', as mentioned before, creates new users from their LDAP information if they are not present in SugarCRM. The encryption key may be left empty.

Tuesday 7 August 2007

svnsync: A Subversion Mirror

The steps to mirror a Subversion repository are detailed here.

My take on those steps:
$ svnadmin create /var/svn/backup
$ echo '#!/bin/sh' > /var/svn/backup/hooks/pre-revprop-change
$ chmod +x /var/svn/backup/hooks/pre-revprop-change
$ svnsync init file:///var/svn/backup svn+ssh://[username]@svn.sixpaq.com/home/[username]/svn/projects
$ svnsync sync file:///var/svn/backup
Other places of interest are here, here and here

Wednesday 18 July 2007

Python strings (vs. Java strings)

Python strings are more or less similar to Java strings. One difference is that formatting of Python strings is done by the strings themselves. Python strings also have more methods than their Java counterparts.

Methods:
The Python methods find (the same as index) and rfind (rindex) have Java counterparts in indexOf and lastIndexOf, as do lower and upper in toLowerCase and toUpperCase. split, replace, startswith and endswith are common to Python and Java.

join, the opposite of split, has no Java equivalent; it concatenates the members of a string sequence with a specified divider. Neither do these methods: count, isalpha, isalnum, isdigit, islower, capitalize, isupper, title, istitle, swapcase, expandtabs and translate.
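The Java side of that mapping can be seen in a small example (the string and values are chosen just for illustration):

```java
public class StringMethodMapping {
    public static void main(String[] args) {
        String s = "Hello, world";

        // Python find/rfind correspond to indexOf/lastIndexOf
        System.out.println(s.indexOf("o"));      // 4
        System.out.println(s.lastIndexOf("o"));  // 8

        // Python lower/upper correspond to toLowerCase/toUpperCase
        System.out.println(s.toUpperCase());     // HELLO, WORLD

        // split, replace, startswith and endswith exist on both sides
        System.out.println(s.replace("world", "there"));  // Hello, there
        System.out.println(s.startsWith("Hello"));        // true
    }
}
```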

Formatting:
The string formatting operator, the percent (%) sign, does all the work as an example from the Beginning Python: From Novice to Professional book shows:
>>> format = "Hello, %s. %s enough for ya?"
>>> values = ('world', 'Hot')
>>> print format % values
Hello, world. Hot enough for ya?
Templates strings are another way of formatting. Basically these are strings with variables that are substituted with the actual value. Below are two examples from the book:
>>> from string import Template
>>> s = Template('$x, glorious $x!')
>>> s.substitute(x='slurm')
'slurm, glorious slurm!'
>>> s = Template("It's ${x}tastic!")
>>> s.substitute(x='slurm')
"It's slurmtastic!"
A third example uses a dictionary for substitution with value-name pairs:
>>> s = Template('A $thing must never $action.')
>>> d = {}
>>> d['thing'] = 'gentleman'
>>> d['action'] = 'show his socks'
>>> s.substitute(d)
'A gentleman must never show his socks.'

Wednesday 4 July 2007

Creating a Drupal module

First, went to the module developer's guide page and did the tutorial for Drupal 5.x. The block module from the tutorial, when completed, is full of code that integrates it into the Drupal CMS. We don't need much of that code for now.

We want a quick and dirty email notifier module. First we create the info file that contains information which the CMS will display. The module's name is Messenger so the file will be messenger.info:
; $Id$
name = Messenger
description = "Sends out an email whenever new content has been created."

Then we have the module file, messenger.module. We add the hook_help and hook_perm functions from the tutorial. Through the Hooks API, we implement hook_nodeapi, which runs whenever there is node activity. This function is where we put the mail call:
function messenger_nodeapi(&$node, $op, $a3 = NULL, $a4 = NULL) {
  switch ($op) {
    case 'insert':
      // Look up the admin user's e-mail address.
      $query = "SELECT mail FROM {users} WHERE name = 'admin'";
      $queryResult = db_query($query);
      $user = db_fetch_object($queryResult);

      $to = $user->mail;
      $subject = 'New content available!';
      $body = "Someone's been adding new content! To ensure it's nothing illegal, please go check it out.";
      $from = 'drupalcares@whocares.com';

      $mail_success = drupal_mail('messenger', $to, $subject, $body, $from);
      if (!$mail_success) {
        drupal_set_message(t('There has been an error sending the notification mail.'));
      }
      else {
        drupal_set_message(t('A mail has been sent to the admin. Prepare yourself! :)'));
      }
      drupal_set_message(t('Yay, you just inserted something! :)'));
      break;
  }
}

One thing to check is whether sendmail is available on the Ubuntu system. If sendmail is not working then the module will not work.

The module is functional now. Further improvements can be done; sending the mail to more than one admin or even to users, restricting the notification to just pages or blog entries and making it configurable through the CMS.

Tuesday 26 June 2007

Drupal Modules

Drupal requires a few modules before it can truly function to one's expectations. The three sites listed below are more than enough for the beginner. Some of the modules are only relevant if the website to be built is of a particular type - community, commercial, blog, etc. That said, the one module that must be installed is a WYSIWYG editor. The default Drupal way of creating content is to type in the page wholesale with all the markup, which is a pain!

Modules:
Top 10 Drupal modules
10 Drupal Modules You Can't Live Without
Drupal Podcast No. 40: Top 40 Projects

WYSIWYG Editor:
TinyMCE WYSIWYG Editor
widgEditor - A WYSIWYG editor
FCKeditor

Friday 22 June 2007

PHP HTTP Authentication using database with PEAR

With Ubuntu 7.04 (Feisty Fawn) and a Synaptic installation of php5, you'll have to install the Auth_HTTP package as in the book, and then the DB package.

To obtain data read from the database, one must use the getAuthData("column_name") method on the Auth_HTTP object.

Thursday 21 June 2007

PHP HTTP Authentication

The HTTP Authentication for PHP comes in two versions, “Basic” and “Digest”.
  • Basic sends the username and password across the net in plain text.
  • Digest uses one-way hashing techniques (e.g. MD5 or SHA1) to prove that both client and server are who they say they are, without actually sending the password at all (in fact the server doesn't even need to store the password itself).

A problem with HTTP authentication is that there is no easy way to log out; in fact there is no support for logging out at all. The browser can access the restricted material until the cached credentials are flushed, which usually happens when the browser is closed.

Friday 15 June 2007

Beginning PHP and MySQL 5 - Chapters 1-7

Ch 1 - An Introduction to PHP
Basically the history of PHP and why you should use it.

Ch 2 - Installing and Configuring Apache and PHP
Useful in getting started. There are a lot of configuration options listed. Using default values, so I skipped over it for now.

Ch 3 - PHP Basics
Learned the available syntax, quite a lot of it, including a short-form version. Went through comments, outputting to HTML, datatypes, variables and constants, expressions, and control structures like if, if-else and switch.

Ch 4 - Functions
All about functions, the prototype for a function is similar to a method:
function function_name(parameters)
{
    function-body
}
No return type is declared; if a value is needed, a return statement is added to the body.

Variable functions are functions which are evaluated at execution time to retrieve their names:
$function()
Given a URL parameter, this allows one to bypass the tedious if statement in order to know which function to call:
if($trigger == "retrieveUser")
{
    retrieveUser($rowid);
}
else if($trigger == "retrieveNews")
{
    retrieveNews($rowid);
}
else if($trigger == "retrieveWeather")
{
    retrieveWeather($rowid);
}
to the much more compact:
$trigger($rowid);
There are security risks in using this so care must be taken.

Reuseable functions can be stored in a function library and called up by:
include("function.library.php");

Ch 5 - Arrays

Of note is the natural sorting function, natsort(), which sorts (10, 1, 20, 2) to (1, 2, 10, 20) instead of (1, 10, 2, 20).

Ch 6 - Object-Oriented PHP
Normal OO stuff as found in C++ or Java. The Properties feature in PHP can be used to extend a class by adding new properties. A potential gotcha for constructors is that they do not call the parent constructor automatically.

Ch 7 - Advanced OOP Features
Again, more of the same. However, there are OOP features that are not supported, such as constructor, method and operator overloading. There is a section on reflection, showing how to obtain the class type, methods and parameters.

Wednesday 13 June 2007

Initial thoughts on PHP

Learning PHP through a book entitled Beginning PHP and MySQL 5: From Novice to Professional, Second Edition by W. Jason Gilmore.

PHP is a loosely-typed language due to its origin as a web counter script. This is inherent in the quick and dirty way it does things, one can put in a variable and assign and reassign it to various data types like strings, integers and floats. There is no need for declarations and type-casting, though it is available. Coming from a Java background, that is very off-putting. Acclimation would be easy but care must be taken to prevent any bad programming habits from creeping in.

The OOP aspect of PHP is a mix of C and Java coding styles. In that respect, it is pretty conventional and a Java programmer can take easily to it. Relatively simple websites should be easy to prototype quickly.

Thursday 7 June 2007

Installing CruiseControl - a continuous integration framework

The first step in installing CruiseControl (CC) is going to the SourceForge site and downloading the distribution. There are two flavors, a binary and a source distribution. You should choose the binary distribution as it is a trimmed version of the source distribution and is much simpler to use. After the download, extract the file to where you want to place it.

From here, you will want to follow the getting started guides for the binary and source distributions in order to set up CruiseControl. The binary distribution lets you run it 'out of the box' and you can see how a functional CC is supposed to work. Once you have the hang of it, you can try out putting your own project under CC. Using the guide for the source distribution, under the "Running the Build Loop" section, you can specify a place to store all your work.

Now the binary distribution sets the root of the CC directory to wherever the sh/bat launcher is called from; the requirement is that CC's lib and dist directories be there. So calling cruisecontrol.sh from the extraction directory also binds the project storage there. If you call the script from the location specified in the previous paragraph, CC will complain about not finding the lib and dist directories. The solution is to edit cruisecontrol.sh, hardcode CCDIR to the extraction directory, and then call the script from the project storage. Note that the binary distribution uses a projects directory instead of a checkout directory.

After that, follow the guide in creating config.xml and build-.xml, making changes as necessary. CC makes use of the Ant build tool, so all the project compiling, testing and archiving are done with it. This site tells you how to use SVN in Ant instead of CVS.
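For orientation, a stripped-down config.xml tends to look something like the sketch below. The project name, paths and interval are placeholders; check the CruiseControl plugin documentation for the exact attributes supported by your version.

```xml
<cruisecontrol>
    <project name="myproject">
        <!-- Poll Subversion for new commits -->
        <modificationset quietperiod="30">
            <svn localWorkingCopy="projects/myproject"/>
        </modificationset>
        <!-- Run the Ant build at most every 300 seconds -->
        <schedule interval="300">
            <ant buildfile="projects/myproject/build.xml"/>
        </schedule>
    </project>
</cruisecontrol>
```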

Wednesday 6 June 2007

Continuous Integration

Continuous Integration (CI) is a software development practice of having team members integrate their work into the project frequently. Every integration is automatically built and tested so integration errors can be flushed out.

From Martin Fowler's article on CI, the practices to follow are:

  1. Maintain a Single Source Repository
  2. Automate the Build
  3. Make Your Build Self-Testing
  4. Everyone Commits Every Day
  5. Every Commit Should Build the Mainline on an Integration Machine
  6. Keep the Build Fast
  7. Test in a Clone of the Production Environment
  8. Make it Easy for Anyone to Get the Latest Executable
  9. Everyone can see what's happening
  10. Automate Deployment

Current practice only makes use of steps 1 and 2. With step 2, though an Ant build script is present, the Eclipse IDE's automatic build is favored.

Step 3 - Tests are coded, but currently they are run by hand, not in conjunction with the build. Step 4 - This is not adhered to; usually committing is done when a large task has been completed.

Steps 5, 6 and 7 are not relevant with the current practice.

Step 8 - Current practice is to ask the developer for the latest code, use the source repository to transfer the code, then build it. If the code is transferred to a new machine, custom settings have to be supplied before it can be built. Generally a lot of hassle and potential pitfalls.

Step 9 - Only the developer knows what's happening, anyone else has to be informed by that person.

Step 10 - The Ant build script has a deployment task. However that task only deals with production deployment which is set to a particular server and is not suitable for testing.

CI is supposed to lessen integration impact and allow for better communication between team members who are working on a project. Being a development team of one, there are no major issues. Anything that occurs is the result of that single developer and must subsequently be resolved by that person. However that will change as the team grows.

Wednesday 25 April 2007

Wicket (vs. Tapestry)

Wicket, like Tapestry, is a view technology that uses plain HTML and Java code to render web sites. But where Tapestry splits the controlling logic between the Java code and the html/page files, Wicket puts all logic in the Java code and uses the HTML pages only to define the position of its components.

In addition, there is no XML configuration needed aside from the inclusion of the Wicket servlet in the web application; any configuration is done in Java code. As such, Wicket components can be reused and can also inherit from other components - normal Java behaviour.

After trying Wicket for a few days, it is much easier to use than Tapestry. The split logic in Tapestry forces one to constantly refer to the HTML pages for which components are used and their settings, and to the Java for the logic. Keeping all the logic in Java gives a clearer picture of what the components are and what they do, and the HTML pages only need their wicket ids to function.

Another thing is the strict enforcement of wicket ids referencing their respective components. If a wicket id in the markup does not reference a component somewhere in the Java code, the webapp cannot render it and shows an error page. The same goes for a component that is added to a page without a matching id in the markup.

Shifting from Tapestry, one has to change the mindset of a single Java file per HTML page. Wicket also requires that, but a base page can be defined and, taking advantage of inheritance, extended by other pages. Components in one page can be reused in another, and so on.

Tuesday 10 April 2007

Maven

http://maven.apache.org/

Maven can be run straight out of the archive once the system path variables are set. Following the '5 minutes' guide, I created a 'Hello World' app, compiled, tested and finally packaged it into a jar. Going further, 'mvn eclipse:eclipse' allowed it to be imported into the Eclipse IDE. There is also a plugin available for tight integration between Maven and Eclipse.

Dependency management is one prime reason for using Maven. Dependencies are downloaded from a remote repository and stored in a local repository that Maven creates, so across several projects there is only one copy of each dependency. The default remote repository can be changed to an internal repository housing a private copy of the dependencies.
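As an illustration, a dependency is declared in the project's pom.xml by its repository coordinates, and Maven fetches and caches it in the local repository. The JUnit coordinates below are just a typical example:

```xml
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
</dependencies>
```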

In regards to Ant, Maven makes for a more standardized build process, with little or no configuration needed to start creating, building and testing projects. It provides a common directory layout for each Maven project, easing orientation.



Wednesday 21 February 2007

Repository Pattern in lieu of DAO Pattern

Domain Driven Design - Inject Repositories, not DAOs in Domain Entities

Instead of the usual domain-DAO-database, a Repository is added so that it becomes domain-Repository-DAO-database.

The Repository contains domain-centric methods and uses the DAO to interact with the database.
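A minimal sketch of that layering, with hypothetical names and an in-memory stand-in for the database-backed DAO:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain object
class Order {
    final String customer;
    final double total;
    Order(String customer, double total) { this.customer = customer; this.total = total; }
}

// The DAO exposes generic, database-centric operations
interface OrderDao {
    void insert(Order o);
    List<Order> findAll();
}

// In-memory stand-in for a real database-backed DAO
class InMemoryOrderDao implements OrderDao {
    private final List<Order> rows = new ArrayList<Order>();
    public void insert(Order o) { rows.add(o); }
    public List<Order> findAll() { return new ArrayList<Order>(rows); }
}

// The Repository offers domain-centric methods and delegates storage to the DAO
class OrderRepository {
    private final OrderDao dao;
    OrderRepository(OrderDao dao) { this.dao = dao; }

    void add(Order o) { dao.insert(o); }

    // A query phrased in domain terms rather than database terms
    List<Order> largeOrdersFor(String customer, double threshold) {
        List<Order> result = new ArrayList<Order>();
        for (Order o : dao.findAll()) {
            if (o.customer.equals(customer) && o.total >= threshold) {
                result.add(o);
            }
        }
        return result;
    }
}
```

Client code talks only to OrderRepository; swapping InMemoryOrderDao for a JDBC- or ORM-backed DAO leaves the domain-facing interface untouched.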

Domain Driven Design : Use ORM backed Repository for Transparent Data Access

With Hibernate or JPA taking care of persistence and simplifying the DAO, the Repository can interact directly with the ORM and drop the DAO. So it becomes domain-Repository-Hibernate/JPA.

The link gives a general implementation of the Repository, which adopts a Bridge pattern to separate the abstraction (exposed to client code) from the implementation (which can be swapped between Hibernate and JPA). It shows how a client Repository can extend the general implementation.

The design is attractive, with the general implementation available for use in any project. However, it requires more thought to use, and the unfamiliarity means more groundwork is needed before one can be proficient with it.

Tuesday 20 February 2007

Ubuntu 6.10, Edgy Eft

Downloaded and installed the latest version of Ubuntu, 6.10 - Edgy Eft. The ISO is a combined live CD and installation CD: it loads the live CD first and offers an option to install.

Installation is pretty much the same as the procedure listed in the Apress book, "Beginning Ubuntu Linux". After installation, there were a lot of updates to download and install.

Ubuntu comes internet-ready, provided one has an active connection. Firefox is available and browsing can be done after installation.

Playing media files (music and video), however, means downloading and installing codecs. The Apress book has a section on it, and there is also an unofficial wiki guide that lists a lot of how-to material, including codec installation. Following the guide requires one to be familiar with the command prompt, though.

NTFS drives can be mounted to the filesystem in a read-only capacity. One would need to use FAT drives to share data between Windows and Linux.

The GUI offers more than enough configuration managers that the command prompt can be ignored for the basic user.

Thursday 15 February 2007

Spring-JPA-Tomcat

Created an Eclipse project that follows the 'Introduction to Spring 2 and JPA' pdf. This entails creating POJOs, the business objects of the application, which in this case are the Employee class and the Address class. The POJOs do not contain any business logic, possessing only their data fields and the associated getters and setters. Two constructors are added, a no-arg constructor and another constructor that sets all the fields with data.

The POJOUnitTest class, a JUnit test case, is coded to unit test the Employee and Address classes. Their objects are created, populated with data (via setter methods and also the constructor) and the data is then verified in the test.

A service layer containing the business logic to use these POJOs is visualized in the EmployeeService interface, which contains abstract methods for dealing with the Employee class.

Up to now, it's been plain Java coding. Spring and JPA come into play when one needs to add database support. JPA annotations are added to the POJOs, at the class and field level, to establish the mapping between objects and their database counterparts. Spring's JPA support enables one to use its DAO API to code the implementation class, called EmployeeDAO, which extends JpaDaoSupport and implements the EmployeeService interface.

Now we use Spring to string these POJOs up as beans in dwspring-service.xml. Here the provider is changed from TopLink to Hibernate and a database change to MySQL by modifying the entityManagerFactory bean's properties as shown below:
From:
<property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.TopLinkJpaVendorAdapter">
        <property name="databasePlatform"
                  value="oracle.toplink.essentials.platform.database.HSQLPlatform"/>
    </bean>
</property>

To:
<property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
        <property name="databasePlatform"
                  value="org.hibernate.dialect.MySQLDialect"/>
    </bean>
</property>

Needless to say, the properties of the dataSource bean (driverClassName, url, username, password) need to be changed to reflect the use of the MySQL database.

Another test, an integration test this time, verifies that the EmployeeDAO implementation of EmployeeService works against an actual database. The class is named EmployeeServiceIntegrationTest and it extends AbstractJpaTests. This allows implementation of the getConfigLocations() method, where one or more bean configuration files can be parsed by the Spring engine. This enables automatic dependency injection: when Spring loads the EmployeeServiceIntegrationTest class, it discovers an unfulfilled dependency - a property of type EmployeeService. The engine looks through the dwspring2-service.xml file for a configured bean of type EmployeeService and injects it via the setEmployeeService() method. The relevant code is below:

public class EmployeeServiceIntegrationTest extends AbstractJpaTests {
    private EmployeeService employeeService;
    private long JoeSmithId = 99999;

    public void setEmployeeService(EmployeeService employeeService) {
        this.employeeService = employeeService;
    }

    protected String[] getConfigLocations() {
        return new String[] {"classpath:/com/ibm/dw/spring2/dwspring2-service.xml"};
    }
}


After that, in order to run the test, one needs to include all the dependency JARs from the Spring and Hibernate libraries, including the MySQL driver file. A persistence.xml file is required by the JPA specification. It describes a persistence unit, though in this case it is only there to satisfy the spec. The file is placed in the META-INF folder and must be accessible through the classpath. The content is below:

<persistence-unit name="dwSpring2Jpa" transaction-type="RESOURCE_LOCAL"/>

Now the web application comes into play. The UI layer is Spring MVC, and the first controller class created is MainController, which handles the initial incoming request and obtains a list of all the employees in the system. Next, the dwspring2-servlet.xml file (dwspring2 being the name of the DispatcherServlet) is configured with all the beans required by Spring MVC. The web.xml file contains the configuration of the DispatcherServlet and is located in the WEB-INF folder.

MainController does not display the data it holds; instead it hands the data over to a view which resolves to a JSP file, home.jsp, located in the jsp folder. home.jsp displays the id and full name of each employee, and also makes links out of the ids. When clicked, the links go to the controller EmpDetailsController, which fetches all the details about a particular employee (in other words, the employee object). A command class, EmployeeDetailsCommand, is used to parse the link arguments into an object. Here the only argument passed is the employee id, so EmployeeDetailsCommand has only one data field with a getter and setter.

EmpDetailsController passes the employee object it holds to the empdetails view, which, as before, is resolved by the InternalResourceViewResolver; appending the appropriate prefix and suffix yields /jsp/empdetails.jsp as the view handler. empdetails.jsp displays the employee's details.

home.jsp and empdetails.jsp use the css/dwstyles.css stylesheet to format their HTML. This only affects appearance.

The next step is the Eclipse WTP process; however, it has been replaced here with an Ant build. This step compiles the code, builds a deployable WAR file (for deployment in a J2EE-compatible Web tier container) and deploys that file to a Tomcat server. The WAR file consists of the compiled classes, the dependency library jars, the web content and the configuration files (context.xml, persistence.xml, dwspring2-servlet.xml, dwspring2-service.xml, web.xml).

Before the WAR file can be deployed, a number of things had to be done. When a Spring JPA application runs on Tomcat, bytecode "weaving" during class loading is required for the JPA support to work properly. The standard Tomcat classloader does not support this. A Spring-specific classloader is needed.

Installing the Spring classloader:
1) Copy spring-tomcat-weaver.jar into Tomcat's server/lib subdirectory. The spring-tomcat-weaver.jar library can be found in the dist/weaver subdirectory of the Spring distribution.
2) Configure the context.xml file (located in META-INF) to let Tomcat know to replace the standard classloader for this particular web application.
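The context.xml change is essentially a one-liner; this sketch uses Spring's TomcatInstrumentableClassLoader, and the path attribute is an assumption:

```xml
<!-- META-INF/context.xml: swap in Spring's instrumentable classloader -->
<Context path="/spring2web">
    <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader"/>
</Context>
```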

Spring needed to hook into Tomcat's context loading pipeline so a ContextLoaderListener was added to web.xml.

Datasources are managed by Tomcat and are available through a standard Java Naming and Directory Interface (JNDI) lookup mechanism. The employee system runs as a Web application inside Tomcat and should obtain its datasource through Tomcat's JNDI. To accomplish this, the MySQL driver needs to be copied to Tomcat's common/lib subdirectory. Then configure the context.xml file, adding a JNDI resource. With a resource name of "jdbc/dwspring2", the configuration makes the JNDI datasource available through the name java:comp/env/jdbc/dwspring2. Next, add a resource reference in web.xml, making it available for use within the web application. Finally, dwspring2-service.xml must be modified to use the JNDI datasource.
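Pieced together, the JNDI setup touches three files. The driver class is MySQL's standard one; the URL and credentials below are placeholders:

```xml
<!-- 1) META-INF/context.xml: define the Tomcat-managed datasource -->
<Resource name="jdbc/dwspring2" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/dwspring2"
          username="user" password="secret"/>

<!-- 2) web.xml: reference the resource from within the web application -->
<resource-ref>
    <res-ref-name>jdbc/dwspring2</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>

<!-- 3) dwspring2-service.xml: look the datasource up through JNDI -->
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/dwspring2"/>
</bean>
```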

The Spring engine needs to locate and process the bean configuration file (dwspring2-service.xml) for the POJOs in order to wire them up with the web application. The context parameter in web.xml must be configured with the location of the file.
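The web.xml fragment for this, combined with the ContextLoaderListener mentioned earlier, presumably looks along these lines (the exact param-value path is an assumption, based on the file structure shown later):

```xml
<!-- web.xml: tell the ContextLoaderListener where the bean definitions live -->
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/dwspring2-service.xml</param-value>
</context-param>
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```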

The final step before deployment is to fill the database with data. The FillTableWithEmployeeInfo class was coded to do this; it extends AbstractJpaTests. A great feature of tests based on AbstractJpaTests is that all database changes are rolled back upon completion of a test, allowing the next test to run against a clean database. However, calling the setComplete() method within a test commits the transaction instead of rolling it back, making the changes permanent - which is exactly what FillTableWithEmployeeInfo does.

Finally the WAR file (spring2web.war) can be deployed to Tomcat's webapps subdirectory and loaded. However the deployment was unsuccessful.

From Tomcat's Catalina log, it was a puzzling one-liner error about the context listener. Web application logging was set up by adding a log4j.properties in order to find out what was wrong. The logged error was more verbose: essentially a NoClassDefFoundError for a class in jasper-compiler.jar, which is located in Tomcat's common/lib subdirectory. Adding that file to the web application library only turned it into a ClassCastException. The project was stuck for a while.

Got hold of the springjpa project from MemeStorm, and tried to deploy that as well. Whilst doing that, changed the loadTimeWeaver property in dwspring2-service.xml from SimpleLoadTimeWeaver to InstrumentationLoadTimeWeaver to follow the springjpa project. Read that SimpleLoadTimeWeaver was only suited for testing, so for real deployment, InstrumentationLoadTimeWeaver or ReflectiveLoadTimeWeaver should be used. Whatever the problem was, it wasn't the loadTimeWeaver property as both applications still refused to deploy.

Eventually learned that it was the Spring classloader at fault. When Tomcat replaced the standard classloader with the Spring classloader, the Spring classloader did not have the classpath the standard classloader possesses, and so could not access Tomcat's library jars. The solution was to add the attribute useSystemClassLoaderAsParent="false" to the Loader element in context.xml. That error was hurdled, but a new one popped up.
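With the fix applied, the Loader element reads roughly as follows (the path attribute is an assumption):

```xml
<!-- META-INF/context.xml: stop the Spring classloader from using the
     system classloader as its parent, so Tomcat's jars stay visible -->
<Context path="/spring2web">
    <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader"
            useSystemClassLoaderAsParent="false"/>
</Context>
```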

The new error was that no persistence units were parsed from META-INF/persistence.xml. This one was simple to solve: META-INF/persistence.xml needs to sit at the root of the classpath. A move to the web application's WEB-INF/classes folder solved it.

The next error mentioned a Java agent, and prior reading indicated this had to do with the InstrumentationLoadTimeWeaver: it needs a Spring agent to be loaded into the JVM. To accomplish this in Tomcat, the line set JAVA_OPTS=%JAVA_OPTS% -javaagent:"%CATALINA_BASE%\server\lib\spring-agent.jar" was inserted into catalina.bat in Tomcat's bin subdirectory. The spring-agent.jar mentioned in the line can be found in the dist/weaver subdirectory of the Spring distribution; the file was copied to Tomcat's server/lib subdirectory.

Success! Deployment went without a hitch and the web application can be accessed, displaying the list of employees previously inserted into the database and their details when their link was clicked.

Messed around with the loadTimeWeaver property, changing it back to SimpleLoadTimeWeaver and even commenting it out as Hibernate apparently does not require it. The web application still runs fine.

One last change was the addition of index.jsp and include.jsp, which were from a previous project. A slight modification to the welcome file in web.xml and one can now access the web application via http://localhost:8080/spring2web without any need for a filename.
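The welcome-file tweak in web.xml is presumably along these lines:

```xml
<!-- web.xml: serve index.jsp when no filename is given in the URL -->
<welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
</welcome-file-list>
```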

The file structure of the spring2web web application:

spring2web
|-css
| |-dwstyles.css
|
|-jsp
| |-empdetails.jsp
| |-home.jsp
| |-include.jsp
|
|-META-INF
| |-context.xml
|
|-WEB-INF
| |-classes
| | |-com.ibm.dw.spring2.*
| | |-com.ibm.dw.spring2.web.*
| | |
| | |-META-INF
| | | |-persistence.xml
| | |
| | |-log4j.properties
| |
| |-lib
| | |-*.jar
| |
| |-dwspring2-service.xml
| |-dwspring2-servlet.xml
| |-web.xml
|
|-index.jsp


The steps taken in Spring-JPA-Tomcat:
- POJOs
- Unit Test
- dwspring2-service.xml
- Integration Test
- META-INF/persistence.xml
- Spring MVC
- dwspring2-servlet.xml
- web.xml
- META-INF/context.xml (not at the same location as persistence.xml)
- (Tomcat)/common/lib <-- mysql-connector-java-5.0.4-bin.jar
- (Tomcat)/server/lib <-- spring-tomcat-weaver.jar, spring-agent.jar
- (Tomcat)/bin/catalina.bat <-- Java agent command

Monday 12 February 2007

JSP Front End Testing

With the JSP front end, all correct operations dealing with valid employee ids will work.

As there is no error correction and no validation, there are a lot of bad cases:
Insert: Anything can be inserted. An employee id with null for every field is possible.

Update: With a valid id, all the fields can be updated with junk data, even nulled. Without a valid id, the server throws an exception.

Delete: Valid ids will be deleted, invalid ones will create an exception report.

Find: Valid ids will be displayed, invalid ones will display a blank page.

Monday 29 January 2007

Spring in Action

Inversion of Control
Inversion of Control (IoC) is the reversal of responsibility with regard to how an object obtains references to other objects. Normally, each object is responsible for obtaining its own references to its dependencies. With IoC, objects are given their dependencies at creation time by an external entity which handles all the objects in the system.

Dependency Injection

Dependency Injection is merely a more apt name for IoC, given that dependencies are injected into objects. There are three types of IoC:
1) Interface Injection - Dependencies are managed by implementing special interfaces.
2) Setter Injection - Dependencies and properties are configured via the setter methods.
3) Constructor Injection - Dependencies and properties are configured via the constructor.
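The last two styles can be sketched without Spring at all; the class and method names here (EmployeeDao, EmployeeService and so on) are invented for illustration:

```java
// Hypothetical names for illustration; not from the book or any project above.
interface EmployeeDao {
    String find(long id);
}

class InMemoryEmployeeDao implements EmployeeDao {
    public String find(long id) { return "employee#" + id; }
}

// Constructor injection: the dependency is supplied at creation time.
class EmployeeService {
    private final EmployeeDao dao;
    public EmployeeService(EmployeeDao dao) { this.dao = dao; }
    public String lookup(long id) { return dao.find(id); }
}

// Setter injection: the container calls the setter after construction.
class ReportService {
    private EmployeeDao dao;
    public void setEmployeeDao(EmployeeDao dao) { this.dao = dao; }
    public String report(long id) { return "report for " + dao.find(id); }
}

public class WiringDemo {
    public static void main(String[] args) {
        // The "external entity" (a Spring container in practice, plain code here)
        // hands each object its dependencies.
        EmployeeDao dao = new InMemoryEmployeeDao();
        EmployeeService service = new EmployeeService(dao);
        ReportService report = new ReportService();
        report.setEmployeeDao(dao);
        System.out.println(service.lookup(7));   // employee#7
        System.out.println(report.report(7));    // report for employee#7
    }
}
```

Neither service knows which EmployeeDao implementation it is talking to, which is exactly the loose coupling the book is after.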

IoC allows the Java objects to be loosely coupled, interacting through interfaces. It lets the programmer set up and configure the objects as desired, while leaving little trace of it in the code itself.

Application design for Spring is based on interfaces. Overall, the code is normal POJOs, until arriving at the Spring setup: a class using all the objects coded, plus a Spring configuration file, usually XML.

Thursday 18 January 2007

Native Hibernate vs. Hibernate JPA

Native Hibernate uses only the Hibernate Core for all its functions. The code for a class that will be saved to the database is displayed below:
package hello;

public class Message {
    private Long id;
    private String text;
    private Message nextMessage;
    // Constructors, getters, setters...
}
As can be seen, it is merely a Plain Old Java Object (POJO). The relational mapping that links the object to the database table is in an XML mapping document. The actual code that will create and save the object is below:
package hello;

import java.util.*;
import org.hibernate.*;
import persistence.*;

public class HelloWorld {
    public static void main(String[] args) {
        // First unit of work
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = session.beginTransaction();
        Message message = new Message("Hello World");
        Long msgId = (Long) session.save(message);
        tx.commit();
        session.close();
        // Shutting down the application
        HibernateUtil.shutdown();
    }
}
Session, Transaction and Query (not shown) objects are available thanks to the org.hibernate import. They allow a higher-level handling of database tasks than DAOs using raw JDBC.

Hibernate JPA is accessed through the use of Hibernate EntityManager and Hibernate Annotations. The Hibernate EntityManager is merely a wrapper around Hibernate Core, providing and supporting JPA functionality. Thus the change in the code can be seen below:
package hello;

import javax.persistence.*;

@Entity
@Table(name = "MESSAGES")
public class Message {
    @Id @GeneratedValue
    @Column(name = "MESSAGE_ID")
    private Long id;

    @Column(name = "MESSAGE_TEXT")
    private String text;

    @ManyToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "NEXT_MESSAGE_ID")
    private Message nextMessage;

    // Constructors, getters, setters...
}
The XML document with all the relational data has been removed and replaced with inline annotations, which are provided by the javax.persistence import. The only difference between the Hibernate POJO and the JPA POJO is the annotations. The code itself will run fine either way; the annotations mark it as a persistent entity but do nothing unless Hibernate goes through them. JPA can glean enough information from them for the ORM and persistence tasks. The HelloWorld code:
package hello;

import java.util.*;
import javax.persistence.*;

public class HelloWorld {
    public static void main(String[] args) {
        // Start EntityManagerFactory
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("helloworld");
        // First unit of work
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        Message message = new Message("Hello World");
        em.persist(message);
        tx.commit();
        em.close();
        // Shutting down the application
        emf.close();
    }
}
The Hibernate import is gone, replaced by javax.persistence. The EntityManagerFactory, EntityManager and EntityTransaction run the database tasks.

Both APIs seem similar, and choosing one over the other is a matter of preference. Native Hibernate is the cleaner one, with the relational data kept in an XML document. Hibernate JPA is standardised with Java and can be ported easily.

Other JPA implementations:
Open-source:
GlassFish
Apache OpenJPA

Commercial:
SAP NetWeaver
Oracle TopLink
BEA Kodo

Hibernate

Hibernate is an open-source project that handles the role of the persistence layer, becoming the middleman between the business logic code and the database data. Its expressed purpose is to free developers from the tedious and common coding of database tasks such as queries, insertions and deletions.

Take the previous DAOExercise as an example, where most of the code dealt with inserting, selecting and deleting data from the MySQL database. Hibernate would handle these mundane tasks and allow the developer to focus more on the business logic and rare SQL code exceptions.

Hibernate incorporates ORM (object/relational mapping), which maps objects to their proper tables in the database. Such logic would otherwise be hand-coded in DAOs using raw JDBC, and it would be up to the developer to track and maintain any changes in either the object or the database table. Use of Hibernate simplifies matters and makes maintenance easier.

Wednesday 17 January 2007

Object Pool Pattern

The Object Pool pattern dictates having an object (usually a singleton) maintain a pool of reusable objects that can be checked out and in by clients who will use them. Connections to databases are the perfect candidate for this pattern.

The ConnectionFactory (CF) in the DAOExercise can be the connection-pool manager, with the clients being the DAO classes. A DAO class requests a connection from the CF for a query. The CF checks its pool: if there are no objects in the pool, it creates one and hands it to the DAO class; otherwise it pops an object out and returns it. Once the DAO class is done with the query, it sends the connection back to the CF, which puts it into the pool. The pool may have a maximum number of objects, in which case if all objects have been checked out, the CF cannot create another and must wait for an object to be returned before it can honour the DAO class's request.
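The check-out/check-in cycle can be sketched in plain Java; PooledConnection is a stand-in for a real database connection, and a real pool would block rather than return null when exhausted:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Stand-in for an expensive resource such as a database connection.
class PooledConnection { }

class ConnectionPool {
    private final Deque<PooledConnection> pool = new ArrayDeque<PooledConnection>();
    private final int max;   // maximum number of objects ever created
    private int created = 0;

    public ConnectionPool(int max) { this.max = max; }

    // Check out: reuse a pooled object, or create one if under the limit.
    public synchronized PooledConnection checkOut() {
        if (!pool.isEmpty()) return pool.pop();
        if (created < max) { created++; return new PooledConnection(); }
        return null; // a real pool would wait for a check-in instead
    }

    // Check in: return the object to the pool for reuse.
    public synchronized void checkIn(PooledConnection c) { pool.push(c); }
}

public class PoolDemo {
    public static void main(String[] args) {
        ConnectionPool cf = new ConnectionPool(1);
        PooledConnection a = cf.checkOut();      // created on demand
        System.out.println(cf.checkOut());       // null: pool exhausted
        cf.checkIn(a);
        System.out.println(cf.checkOut() == a);  // true: same object reused
    }
}
```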

The Object Pool pattern benefits designs where resource creation is expensive, slow, or limited in number, and the resource must be shared among the clients using it. Real-world examples of the design are car rentals and timesharing.

Factory Method Pattern vs. Abstract Factory Pattern

The Factory Method pattern is to define an interface for creating an object but let subclasses decide the class to instantiate. For example the UIBuilder class contains two method stubs. The subclasses (EnglishUIBuilder, MalayUIBuilder) have to implement these stubs in their own way.

The Abstract Factory pattern is to provide an interface for creating families of related or dependent objects without specifying their concrete classes. This pattern is often used in conjunction with the Factory Method pattern, thus it can be seen as a factory of factory objects.

A static method on UIFactory returns a subclass of UIFactory, which in turn creates the appropriate UIBuilder object. The UIFactory subclass returned is determined by reading a config file, falling back to a default value.
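The two patterns can be sketched together using the UIBuilder/UIFactory names from the text; the method names and menu strings are invented, and a plain string stands in for the config file:

```java
// Factory Method: UIBuilder defines the stubs, subclasses decide the products.
abstract class UIBuilder {
    abstract String buildMenu();
    abstract String buildButtons();
}

class EnglishUIBuilder extends UIBuilder {
    String buildMenu()    { return "File Edit Help"; }
    String buildButtons() { return "OK Cancel"; }
}

class MalayUIBuilder extends UIBuilder {
    String buildMenu()    { return "Fail Sunting Bantuan"; }
    String buildButtons() { return "OK Batal"; }
}

// Abstract Factory: a factory of factories hiding the concrete classes.
abstract class UIFactory {
    abstract UIBuilder createBuilder();

    // In the text this decision comes from a config file with a default value.
    static UIFactory getFactory(String locale) {
        if ("ms".equals(locale)) return new MalayUIFactory();
        return new EnglishUIFactory(); // the default
    }
}

class EnglishUIFactory extends UIFactory {
    UIBuilder createBuilder() { return new EnglishUIBuilder(); }
}

class MalayUIFactory extends UIFactory {
    UIBuilder createBuilder() { return new MalayUIBuilder(); }
}

public class UIDemo {
    public static void main(String[] args) {
        // Client code sees only the abstract UIFactory and UIBuilder types.
        UIBuilder ui = UIFactory.getFactory("ms").createBuilder();
        System.out.println(ui.buildMenu()); // Fail Sunting Bantuan
    }
}
```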

Tuesday 16 January 2007

MySQL Class

Prior to the addition of the ConnectionFactory, connection to the database was handled by the MySQL class. The class was to return a connection to any caller via a getter method.

The connection was made static in the class, so that only one instance is created. All the methods were static, thus the class need not be instantiated. With that, a static initializer was added to run once the class was loaded, establishing the connection.

For testing purposes, one connection is enough. In a live environment, the one connection that MySQL has would be swamped with requests; a pool of connections would be better suited for that sort of load. So at the time the choice of a single connection was appropriate, but it would not be scalable outside test conditions.

The static initializer was to initialize and establish the connection. This happens when the class is loaded, in other words when a static method is first called - which will be the getter method. Initialization could take place inside the getter or another method; however, since the connection is static and only needs to be established once, the static initializer was used.

When one has an open connection, one should provide a way of closing it. That was the closeConnection method. Ideally, once the connection has been used by a class, the class should close it. While going through DAOExercise, I discovered that once closed, a connection cannot be reopened. Thereafter the only use for closeConnection was when all the tests were done in the AllTests class.

In retrospect, using the static initializer was not a good idea. It would be better to stick it in the getter method:

public static Connection getConnection() throws SQLException
{
    // Null check must come first: calling isClosed() on a null
    // reference would throw a NullPointerException.
    if (conn == null || conn.isClosed())
    { establishConnection(); }
    return conn;
}

This way, closeConnection can be used.

Addendum: A static reference creates only one instance for the class; all objects of the class share that static reference. As MySQL is never instantiated, that point is moot there. Not so for the DAOExercise objects: two EmployeeDAOImpl objects will share the static connection.

Monday 15 January 2007

Abstract Factory Pattern for DAOExercise

The Abstract Factory Pattern allows for multiple factories which share a common theme to be streamlined into one class. In DAOExercise, this class is the ConnectionFactory which will create the appropriate factory for use. This class currently has a concrete implementation for MySQLConnectionFactory and can be further extended (OracleConnectionFactory, OCBCConnectionFactory, etc).

This implementation allows the underlying database (MySQL) to be divorced from the actual DAOExercise (Employee, Address, Dependent), so any database change (e.g. a switch from MySQL to Oracle) can be coded and inserted into the ConnectionFactory, touching very little of the DAOExercise code.

Another way of looking at ConnectionFactory is that it is a factory of factories. The ConnectionFactory determines which factory is to be handed over to the client code via a string which must be set by the client code. The MySQLConnectionFactory will create and return a connection to the MySQL database. This outlook can be confusing when all the client code sees is the ConnectionFactory reference and not the actual factory object.

Using this pattern for DAO creation (a DAOFactory generating EmployeeDAOFactory, AddressDAOFactory and DependentDAOFactory) is impossible with the current design as the implementing classes are not related to each other. The pattern can get around this with the use of the Adapter pattern but that would still require a major rewrite of the design. At best DAO creation is served by a single factory (DAOFactory) which will generate all three classes. Also unlike the ConnectionFactory, where the objects need different initializing data, the DAOFactory objects are self-contained.

As the purpose of a factory is to generate objects for use, a single instance of it would suffice. Coding it so that it complies with the Singleton pattern would enforce this single instance. However, the pattern is not a requirement. Having multiple factories would not be a problem, save for efficiency and design.

In MySQLConnectionFactory, there is a Properties variable which is used to read a file containing all the database-specific information. Aside from the connection data, it stores all the SQL queries for that database; these queries are used in the DAO classes. The file alone, however, cannot be used to create the DAO objects.
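A small sketch of that idea; the keys, values and file contents are invented, and a StringReader stands in for the actual FileInputStream:

```java
import java.io.StringReader;
import java.util.Properties;

public class DbProperties {
    // Invented contents standing in for a mysql.properties file.
    static final String FILE =
        "jdbc.url=jdbc:mysql://localhost:3306/daoexercise\n" +
        "sql.employee.findById=SELECT * FROM employee WHERE id = ?\n";

    public static Properties load() throws Exception {
        Properties p = new Properties();
        p.load(new StringReader(FILE)); // normally: p.load(new FileInputStream(...))
        return p;
    }

    public static void main(String[] args) throws Exception {
        // The DAO looks its query up by key instead of hardcoding the SQL.
        System.out.println(load().getProperty("sql.employee.findById"));
    }
}
```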

Thursday 11 January 2007

DAOExercise Architecture

Having implemented and written DAO code that accesses a MySQL database, there need to be changes if that code is to access an Oracle database. Though both databases use SQL, code that works for one may be broken or have unexpected results in the other. So the SQL code needs to be tested and rewritten as necessary. The driver class needs to be rewritten with the proper commands and authentication so as to get the right connection to the Oracle database.

Whenever the database changes, the Java code has to be rewritten to accommodate it. This is due to the hardcoding of the SQL and driver information. The only way to avoid rewriting all that is to hand it off to a go-between which interacts with the database and leaves the Java code handling objects only.

Tuesday 9 January 2007

SQL Injection

SQL Prepared Statements are apparently not subject to injection attacks. The precompiled statement treats the wildcard parameters as data only. Attempts to subvert the code proved futile, with no change to the database. Proper arguments work and the code executes.
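The difference can be seen without touching a database, just by looking at the strings involved; the table and column names here are invented:

```java
public class InjectionDemo {
    // Naive concatenation: the input is spliced into the SQL grammar itself.
    public static String naiveQuery(String name) {
        return "SELECT * FROM employee WHERE name = '" + name + "'";
    }

    public static void main(String[] args) {
        String input = "x' OR '1'='1";
        // The quotes break out of the string literal and become live SQL.
        System.out.println(naiveQuery(input));
        // A PreparedStatement instead sends the fixed template
        //   SELECT * FROM employee WHERE name = ?
        // to the database first, then binds the value with setString(1, input),
        // so the whole input stays one inert string datum.
    }
}
```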

Monday 8 January 2007

JDBC Basics

Learning how to use SQL via the Eclipse Java IDE and MySQL. Had problems connecting to the MySQL databases until I remembered to start up the service (>_<).

The most crucial part was getting the connection through the DriverManager class and the settings for it. After that was the creation of tables and filling it with data. The tutorial at http://java.sun.com/docs/books/tutorial/jdbc/basics/tables.html was vague on that part and I had to go look elsewhere to do it.

Used SELECT to print out the table values with the help of the ResultSet and Statement classes. Then looked into updating the data via Java methods instead of normal SQL commands.

Prepared Statements are Statements given an SQL command at creation time. With wildcard parameters in the command, one can use it repeatedly, changing the parameters at will. Looked at the joining of two tables.

The last was transactions, how to commit several statements as an atomic action. The Savepoint methods allowed part of the transaction to survive a rollback.

One thing to keep in mind when building SQL command strings is the spacing.
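A concrete case of the spacing gotcha: fragments concatenated without a trailing space silently fuse into an invalid command.

```java
public class SqlSpacing {
    public static void main(String[] args) {
        String bad  = "SELECT * FROM employee" + "WHERE id = 1";
        String good = "SELECT * FROM employee " + "WHERE id = 1";
        System.out.println(bad);  // SELECT * FROM employeeWHERE id = 1  (broken)
        System.out.println(good); // SELECT * FROM employee WHERE id = 1
    }
}
```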

SCJP Exam

4th January 2007, the day of the SCJP exam, scheduled for 10:00AM. Arrived at the testing centre, was told to leave all my things in a locker. That included the pens I brought, prompting the question - will I be provided pen and paper? Apparently I won't, they have marker pens and thin mousepad-sized writing-board sheets (1 pen, 2 sheets and a duster per tester). That was a let-down. Though it's understandable why they do so, I would be more comfortable with the old pen-and-paper standard. Using a marker pen to record one's answer is somewhat irritating.

The questions were much simpler than expected, after the torture of going through the twisted self-test questions and mock exams. Even knowing that practice questions were harder than the real ones, I was still startled. 2 hours and 55 minutes is enough time to make an initial leisurely pass answering the 72 questions, passing over those with long convoluted code (sure to take some time to understand), a second pass to wrap up unanswered questions and a quick third pass to go through all the questions, minus the drag-and-drop ones. Spending time to record the answers for the drag-and-drop questions is time that could have been better spent elsewhere. Reanswering those drag-and-drop questions is definitely a pain, since the answers are cleared when you want a second look at the questions.

Overall, I think the preparation done for the SCJP exam was sufficient, as I answered most of the questions with confidence. Only on one question did I really have some doubt about what would happen (oh, for a compiler at that time). So though I entered the exam with trepidation, I ended it with a very confident outlook. Of course, getting 13 questions wrong knocked me down with a good dose of humility, though a pass at 81% is not bad.

Looking over the breakdown of the score, I did pretty well in Declarations, Initialization and Scoping, Collections/Generics and Fundamentals, and got perfect marks for Flow Control. The areas where I was lacking were API Contents, Concurrency and OO Concepts. I was surprised about Concurrency; admittedly it is a complex topic, but I did not struggle with any questions regarding it except for one which looked like a deadlock situation. API Contents and OO Concepts were no surprise to me, as the mocks that provided a breakdown listed them as problem areas. However I elected to focus on Generics and Collections, feeling I had a shaky understanding of them, and it paid off.

What I got from the entire affair? Aside from the SCJP certification, which I am not sure would be that useful, the one-month-plus training established a solid grounding in the standard Java language. It exposed me to the new features of Java 1.4 and 1.5. I'm not happy with Generics, thinking about it makes me feel like it is an abstract topic (in the literal, not Java, sense). I can use it for collections type-safety, though thinking about infesting wildcards, super and extends into classes and methods drives me off the deep end.

=^.^= At last I can bind the SCJP book with chains, weight it down with rocks and dump it in the deepest, murkiest river I can find, all the while dancing and cackling madly. =^.^=