Saturday 30 November 2013

Windows 8.1 : A big improvement over Windows 8

After all the negative reviews and tales of doom that followed the release of Windows 8, I recently got the chance to upgrade my Windows 8 machine to Windows 8.1. For starters, the upgrade was free, which was a pleasant surprise, and while the upgrade, carried out through the Windows Store, took a good hour or so to complete, it went through without any problems.
Windows 8.1 is all about smoothing the feathers that Windows 8 ruffled. In the drive to compete aggressively in the phone and tablet segment, Microsoft made changes to basic features of the operating system that didn't go down well with users. The biggest annoyance was the absence of the Start button. Users were expected to get used to the tiled view for invoking and accessing programs, and to get to the classic desktop view they had to go through a desktop tile.
Windows 8.1 fixes these annoyances by bringing back the Start button, albeit not quite in the form it had before but with a slightly different flavour. The main programs can now be invoked from the Start button by right-clicking on it.
Another change that Windows 8.1 brings in is the boot-to-desktop option. Users can now choose to start with the classic desktop view that they are used to rather than the tiled view that comes up by default. This change is again a crowd pleaser and allows users who were comfortable with the old interface to accept this new avatar of Windows.
Overall, after upgrading to Windows 8.1, I am much more comfortable with the operating system than I was with the basic Windows 8.0, so if you are struggling with Windows 8.0, check out the upgrade to Windows 8.1 in the Windows Store before Microsoft removes it from the Free section :-)

Wednesday 30 October 2013

Presentation at eResearch Australasia 2013

Last week, I was at eResearch Australasia 2013, which is turning out to be the annual star event for eResearch in Australia. The conference featured some interesting presentations from researchers at various research organisations, and I even met an attendee who had travelled all the way from Pakistan for it. CSIRO staff gave several presentations, and I was involved in a couple for which we received a lot of positive feedback. Overall the conference was very interesting, and it was great to showcase our applications and connect with other researchers who are working on solving big data and data management problems.

Saturday 14 September 2013

MongoDB Aggregation vs Map Reduce

According to the release notes of MongoDB 2.4, the JavaScript engine has been changed from SpiderMonkey to V8. This change allows Map Reduce jobs to run multiple JavaScript threads concurrently and should improve their performance. Even so, the Aggregation Framework is expected to outrun Map Reduce hands down, since the former runs as compiled C++ code while the latter has to rely on the JavaScript interpreter and on conversions between BSON and JSON to load the dataset and store the results.
In any case, it is always an interesting exercise to answer the same problem using both the Aggregation Framework and Map Reduce, so towards this end I picked up a sample MongoDB collection named images, containing 90,017 records, with the objective of counting the number of times a particular tag occurred across the image records.

> db.images.count();
90017
> db.images.findOne();
{
        "_id" : 1,
        "height" : 480,
        "width" : 640,
        "tags" : [
                "dogs",
                "cats",
                "kittens",
                "vacation",
                "work"
        ]
}

To recap, the objective is to count the number of times a particular tag occurs across the data set.

In the Aggregation Framework, this could be achieved by:

var c = db.images.aggregate(
  [{"$unwind":"$tags"},
  {$group:{_id:{Tag:"$tags"}, "Tag_Count":{$sum:1}}},
  {$sort: {"Tag_Count": -1}},
  {$limit :5}
  ]);
printjson( c ) ;
Results of the Aggregation Query
The Map Reduce solution was as follows:
map = function() {
    if (!this.tags) {
        return;
    }
    for (var i in this.tags) {
        key = { Tag: this.tags[i] };
        value = 1;
        emit(key, value);
    }
}

reduce = function(key, value) {
    return Array.sum(value);
}

result = db.runCommand({"mapreduce" : "images",
"map" : map,
"reduce" : reduce,
"out" : "tag_count"});
printjson( result ) ;
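To pull the top tags back out of the tag_count collection produced above (mirroring the $sort and $limit stages in the aggregation pipeline), a query along these lines can be run:

> db.tag_count.find().sort({ "value" : -1 }).limit(5)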

Results of querying the collection produced by Map Reduce


Saturday 17 August 2013

MongoDB - A database for Big Data

The need to manage large datasets that grow very rapidly has always been there. With social networking sites generating millions of lines of data every minute, the ability to store, process and analyse large datasets is not just desirable but critical for surviving in the race to remain relevant. This scenario has given an impetus to the rise of NoSQL databases. NoSQL, or Not Only SQL, databases can be divided into 4 main categories :
  1. Key-Value databases : Voldemort
  2. Graph databases : InfiniteGraph, Neo4J
  3. Document databases : CouchDB, MongoDB
  4. Column Family stores : Cassandra, HBase, Hypertable
  • MongoDB is a non-relational JSON document store or database. It doesn't support the relational algebra that is most often expressed as SQL.
  • Documents are expressed as JSON and are stored within the MongoDB database in the BSON (Binary JSON) (bsonspec.org) format.
  • BSON supports all the data types available in JSON and a few more. It supports Strings, Floating-point numbers, Arrays, Objects and Timestamp data types.
  • MongoDB supports documents in the same collection that do not have the same schema. This is referred to as supporting a dynamic schema, or being schemaless (see the short example after this list).
  • There is no SQL, no transaction management and no Joins. 
  • The fact that there is no transaction management across multiple documents and no JOINs makes MongoDB better suited for scalability and performance.
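As a quick illustration of the dynamic schema point (the collection and field names here are made up), two documents with completely different fields can live in the same collection:

> db.people.insert({ "name" : "Alice", "age" : 30 })
> db.people.insert({ "name" : "Bob", "email" : "bob@example.com", "roles" : [ "admin", "editor" ] })
> db.people.count()
2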


Problems with traditional RDBMSs and the need for MongoDB-type NoSQL databases:
Traditional RDBMSs are based on concepts that promote strong referential integrity, data normalization and transaction management. This implies that every data model will be spread across several database tables, and to satisfy a query the database might need to perform JOINs. JOINs are not your best friend when you are looking for speed and performance.
Transaction management is another key area that provides reliability and data consistency; however, it comes at the cost of performance.
MongoDB does not use JOINs or provide transaction management. This results in a performance boost, and data access is very fast compared to traditional RDBMSs.
While transaction management is not supported within MongoDB, it does, however, guarantee atomic updates at the level of a single document. That one document may contain other embedded sub-documents.
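For example, using a hypothetical orders collection, an update that changes several fields, including one inside an embedded sub-document, is applied atomically because it touches only one document:

> db.orders.update(
      { "_id" : 1001 },
      { "$set" : { "status" : "shipped", "shipping.carrier" : "FastPost" } }
  )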
MongoDB uses binary-encoded JSON (BSON), which stores the different data types efficiently. Because no storage is allocated for fields that do not exist in a particular document, documents that do not have a complete set of fields can yield significant storage savings in comparison to an RDBMS, where space must be reserved in every row for every column whether it is populated or null.
Large document sets can also be split (or sharded) over multiple servers and automatically redistributed when additional servers are added, giving further scalability.
Real world use cases : SAP, Sourceforge, MTV, Twitter (http://www.mongodb.org/about/production-deployments/)

Thus, MongoDB is all about performance, scalability and speed; however, there are still some scenarios where MongoDB might not be the best fit:
1. It is not suited to applications that require transaction management.
2. It is designed to work behind a firewall, so it offers less security relative to RDBMSs.
3. Individual documents are limited to 16MB, so anything larger has to be broken up and spread across multiple documents.

Friday 2 August 2013

Securing Fedora Commons 3.6.2 with XACML Policies.

The Flexible Extensible Digital Object Repository Architecture aka Fedora Repository uses XACML  based policies for authentication and authorization. XACML is an XML based policy language that is used to define access control lists and secure applications using a standard policy based approach.
The latest release, Fedora 3.6.2, promotes the JAAS-based FeSL as the default security layer. FeSL, which was introduced in 2011, is designed to improve upon the legacy XACML-based security scheme that has been Fedora's backbone since its release. While FeSL simplifies security, XACML still retains its relevance for managing and setting up access to the API-A and API-M interfaces. The API-A interface provides read-only access to the repository's properties and its objects; API-M, on the other hand, enables management of the repository and allows edit access to the contained objects.
While Fedora ships with a basic set of XACML policies that provide a basis for securing access to the two interfaces, there will be scenarios where authoring a custom policy is required. When attempting to write one, the policy writing guide is a good place to start. XACML policies are rule based and generally enforce either a PERMIT or a DENY result for a specific resource. Apart from these two results, there are also the Not Applicable and the Indeterminate results.
When attempting to understand how the policies are applied and what final result is produced, the first key point to take away is that a DENY rule supersedes a PERMIT rule. For example, if an administrator is PERMITTED to access the API-A interface but there is a DENY rule on a specific API-A operation that applies to all users including the administrator, then the administrator will not have access to that specific operation.
Secondly, access to a resource has to be explicitly granted. For example, if there is a DENY rule that limits access to the API-A interface for all non-admin users, it does not implicitly follow that administrators will be able to access the API-A interface; there has to be a PERMIT rule that gives them that access. Thus, while designing policies, do remember to check out section 3.3 in the policy enforcement guide. It might save you hours when you are trying to figure out why a certain user cannot access an interface!

Saturday 13 July 2013

MonjaDB - The Eclipse plugin for MongoDB

MonjaDB is an Eclipse plug-in for connecting to a MongoDB server and manipulating data records. It provides a simple, intuitive interface for viewing, querying and performing CRUD operations on JSON documents stored within the database. As shown in the screenshot below, the list of available databases appears in the DBTree window. In this case, the blog database has been selected and, within it, the posts collection is active. Data from the posts collection is visible in the central window, titled Document List. The windows to the right of the Document List provide an interface for editing a particular record.

monjaDB Interface


The bottom windows titled Actions and Saved Actions give the ability to re-run any database commands or queries and to save those commands for further reference. Finally, the console window captures log statements generated by the MongoDB server.
Overall, monjaDB is a simple yet powerful MongoDB GUI client that provides the ability to perform basic database operations and run queries against a MongoDB database.

Wednesday 3 July 2013

M101J : MongoDB for Java Developers by 10Gen

Recently, I undertook the MongoDB for Java Developers course provided by 10gen Education. The course runs for 7 weeks with an estimated effort of 10 hours per week. The course material was delivered online through videos, there were weekly assignments, and at the conclusion of the 7th week there was a final exam.

As part of the courseware, each week there were several short video lectures covering topics ranging from the installation of MongoDB through to sharding and replication. Each video ran from 2 to 8 minutes, and it would take around 2-4 hours to get through all the videos for a given week and complete the associated quiz questions. At the end of each week's videos there was also a set of homework questions that had to be completed before the end of the week. These homework questions carried a 50% weighting towards the final score, while the final exam made up the other 50% of the course grade.

Overall my experience with the course was very positive. The course has been well designed, and the instructors are closely involved in the development of MongoDB at 10gen and so have a very clear working knowledge of its internals. I had no experience with MongoDB prior to undertaking this course, and by the end of the 7th week I was confident in setting up replica sets and sharded servers.
The only irritant was that it became clear 10gen Education were running the Java version of the course for the first time, as there were a few cases where they mixed up data sets and code from the Python version, which in turn led to confusion among the students. An unfortunate miss was incorrectly marking a correct answer wrong in the final exam, which raised the ire of the students, but most of these problems were ironed out within a day. Apart from these issues, the course material and teaching were very good, and if you are looking to learn MongoDB, this course would be a good place to start.

Finally, my exam score: I got 90% in the course. The 10% I lost was in the final exam, where I got 2 questions wrong. According to the stats released by 10gen, "Of the 7,105 students enrolled, 1,434 students completed the course successfully, a completion rate of 20%." To achieve a grade of completion, one needed a mark above 65%.

Friday 7 June 2013

Oracle Java Certification SE 7 : Changes to the exam process

The Java certification process has undergone a major overhaul since Oracle bought out Sun. Since I started looking into the certification process, I thought I'd summarise the changes for Java SE 6 / 7 over the last 3 years. The following table lists some of the major changes to the Oracle Java Programmer certification process.

Exam Code | Number of Questions / Pass Percent | Time For Exam | Certification Name | Is Exam Still Available?
1Z0-803 | 90 (77%) | 140 mins | Oracle Certified Java Associate / Java SE 7 Programmer I | Yes / Since Oct 2011
1Z0-804 | 90 (65%) | 150 mins | Oracle Certified Java Programmer / Java SE 7 Programmer II | Yes / Since Oct 2011
1Z0-851 (Java 6) | 60 (61%) | 150 mins | Oracle Certified Java Programmer 6 | Yes / Since Oct 2011
CX-310-065 (Java 6) | 72 (pass 47, 65%) | 210 mins | Oracle Certified Java Programmer 6 | No / Replaced by 1Z0-851

To summarise, as of June 2013, if you do not hold any prior certification, the path to becoming an Oracle Certified Java Programmer SE 7 requires two exams to be taken (1Z0-803 and 1Z0-804). If you hold a prior certification, then it is just one upgrade exam (1Z0-805).
Please take a look at this post where I am collecting exam preparation resources.

Saturday 1 June 2013

Configuring Solr 4.0 to index data from a MySQL database


In a previous post, I documented the steps in setting up Solr 4.0 on a Tomcat 7.0.28 instance. In this post, I'll describe the steps involved in configuring the same Solr instance to index data from a MySQL database.

1. Set up another Solr core.

A Solr core holds the configuration details for connecting to a data store as well as the indexed data of the data store.
The simplest way to set up a core is to copy the existing collection1 core and rename the copy 'test_core'.
To make Solr aware of this new core, bring up the Solr Admin console at http://localhost:8080/apache-solr-4.0.0/#/
Select the Core Admin option, click Add Core, and supply the parameters for name, instanceDir, dataDir, config and schema.

As a result of adding this core, the solr.xml file in the root of the solr_home directory will gain an entry for the new core.
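A typical Solr 4.0 core entry looks something like the following (the attribute values mirror the parameters supplied in the Core Admin screen):

<cores adminPath="/admin/cores">
  <core name="collection1" instanceDir="collection1" />
  <core name="test_core" instanceDir="test_core" />
</cores>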

2. Set up the required libraries

Since indexing data from the MySQL database relies on the DataImportHandler, we will need the additional Solr libraries. These are available in your downloaded, unzipped version of Solr at c:\apache-solr-4.0.0\dist.
Copy all the jar files within this directory into a lib directory under your solr_home. The solr_home directory should look similar to the screenshot below.



I could have transferred these jars to Tomcat's lib directory and had Solr pick them up from there, but I was worried about conflicts with existing jars.
Finally, tell Solr about this lib directory by modifying the solr.xml file and adding the sharedLib="lib" attribute to the <solr> element.
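With the shared lib directory and the new core in place, the solr.xml should look roughly like this:

<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="collection1" instanceDir="collection1" />
    <core name="test_core" instanceDir="test_core" />
  </cores>
</solr>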


3. Configure your test_core to talk to the database.

To enable this, you'll need to follow the instructions given on http://wiki.apache.org/solr/DIHQuickStart
To summarise, essentially you will :
(a) Modify the solrconfig.xml and add the definition for a DataImportHandler. Following the DIH quick start, the handler points at a data-config.xml file:

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>

(b) Create a data-config.xml file in the same directory as your solrconfig.xml and specify the database connection parameters and the query to index. A skeleton along these lines (with placeholder connection details and a hypothetical item table) is what is needed; note that the MySQL JDBC driver jar also has to be on Solr's classpath, for example in the lib directory set up in step 2:

<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/test"
              user="db_user"
              password="db_password" />
  <document>
    <entity name="item" query="SELECT id, name, description FROM item">
      <field column="id" name="id" />
      <field column="name" name="name" />
      <field column="description" name="description" />
    </entity>
  </document>
</dataConfig>
Ensure that the name values in the field elements have corresponding entries in the schema.xml.
For example, the field 'description' should exist in the schema.xml.

Finally, we are all set up to import records from our MySQL test schema.

4. Select the test_core in the Solr Admin console and click on the DataImport option, which should be the last option available in the menu.
Clicking on the DataImport option will bring up the DataImporter interface; select the Clean and Commit checkboxes and click on Execute Import.

When the indexing is complete, you will see a message giving the number of documents added or deleted.
To check that the indexes have been created as expected and we are getting some sensible results back, perform a search using the 'Query' option available under 'test_core'.

In my case, I searched for the term 'Hyundai' as I knew of one record having that particular value in the column that has been indexed.
And we get 1 record returned.



Tuesday 21 May 2013

JExcel vs Apache POI : Reading and Writing Excel Files

JExcel is a relatively simple and robust Java-based library for manipulating Excel (.xls) files. While it has less functionality than its bigger and more popular brother, Apache POI, it offers a simple API for Excel manipulation. Some of the features that made me choose JExcel were its relatively small memory footprint, its simple API and its up-to-date documentation.
On the other hand, some drawbacks that made me wary of choosing the tool were its lack of support for .xlsx files and the fact that the last release was in October 2009, more than 3 years ago. This discussion lists a significant number of differences between JExcel and POI, and while it may appear that Apache POI would be the obvious winner, it all boils down to your requirements; after all, you can travel from point A to B in a car or do the same trip in a truck. In situations where the requirements are not very clear, are extensive or might require support for .xlsx files in the future, Apache POI might be a better choice, but if your requirements are clear and you need a fast and efficient Excel generating tool with minimum fuss, choose JExcel.
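Below is a minimal read-and-write sketch using the jxl API (the file name, sheet name and cell value are made up for illustration), followed by the Maven dependency for the library:

import java.io.File;

import jxl.Sheet;
import jxl.Workbook;
import jxl.write.Label;
import jxl.write.WritableSheet;
import jxl.write.WritableWorkbook;

public class JExcelSketch {
    public static void main(String[] args) throws Exception {
        // Write a new .xls file with a single cell
        WritableWorkbook out = Workbook.createWorkbook(new File("report.xls"));
        WritableSheet sheet = out.createSheet("Summary", 0);
        sheet.addCell(new Label(0, 0, "Hello JExcel")); // column 0, row 0
        out.write();
        out.close();

        // Read the same cell back
        Workbook in = Workbook.getWorkbook(new File("report.xls"));
        Sheet first = in.getSheet(0);
        System.out.println(first.getCell(0, 0).getContents());
        in.close();
    }
}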
  
  
<dependency>
  <groupId>net.sourceforge.jexcelapi</groupId>
  <artifactId>jxl</artifactId>
  <version>2.6.12</version>
</dependency>

  

Thursday 9 May 2013

Java 7 : I/O using the NIO package

The NIO package in Java 7 is an important and much-needed update to the Java I/O mechanism. With regard to file I/O, the java.nio.file and java.nio.file.attribute packages are key. An important change from the previous mechanism is the introduction of the Path object, which is analogous to the File object. Other changes are documented here.

A simple example that reads in a list of entries from a text file and appends a special character at the end of every line is given below:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileReaderAndSpecialCharacterInserter {

    private static String SPECIAL_CHARACTER = ",";

    /**
     * @param args
     */
    public static void main(String[] args) {
        FileReaderAndSpecialCharacterInserter fileReader = new FileReaderAndSpecialCharacterInserter();
        fileReader.readFile();
    }

    private void readFile() {
        StringBuffer outputString = new StringBuffer();

        Path path = Paths.get("c:/test.txt");
        Charset charset = Charset.forName("US-ASCII");

        try (BufferedReader reader = Files.newBufferedReader(path, charset)) {
            String line = null;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
                outputString.append(line);
                outputString.append(SPECIAL_CHARACTER);
            }
        } catch (IOException x) {
            System.err.format("IOException: %s%n", x);
        }

        System.out.print("The final output string is : -> " + outputString.toString());
    }
}

---------------
As evident in the example above, the Path object identifies the file to be read and takes on the role played by the File object in the I/O mechanism available up to Java 6. The advantages of this new file I/O mechanism are mainly around scalability and error handling, and are documented in detail here.
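For small files, the same java.nio.file package also offers convenience methods that avoid the explicit reader loop; a brief sketch (the file paths are made up) that reads every line of one file and writes them out to another:

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class SimpleFileCopy {
    public static void main(String[] args) throws IOException {
        Charset charset = Charset.forName("US-ASCII");
        Path source = Paths.get("c:/test.txt");
        Path target = Paths.get("c:/test-copy.txt");

        // Read every line into memory in one call...
        List<String> lines = Files.readAllLines(source, charset);

        // ...and write them straight back out to another file.
        Files.write(target, lines, charset);
    }
}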

Thursday 25 April 2013

Setting up Solr 4.0 on Tomcat 7.0.28

Generally, every user-centric application requires a search capability. With Google easily leading the search domain, user expectations are high when they come to a web application and perform a search. If giving users the Google experience is not possible, consider Apache Solr, a popular, open-source, Java-based search tool that runs Lucene under the hood.
While Solr comes packaged with Jetty and has a basic tutorial that provides pointers on its different features, setting up the tool to run within an existing Apache Tomcat instance can require a bit of effort, as there are several files and folders and it is not immediately clear which of them control the configuration of the tool. This post documents the steps involved in setting up Solr to run with Apache Tomcat 7.0.28 on a Windows server.

Step 1 : Download a copy of Solr and unzip it  to apache-solr-4.0.0.

Step 2: Create a new directory which will be the home of Solr, c:\solr_home, and copy the following files and folders into it (you will find them within the unzipped apache-solr-4.0.0 folder).
bin
collection1
solr.xml
zoo.cfg

collection1 is an existing example core which we will load up into our Tomcat. Within the collection1 directory, you'll find two other folders, conf and data. Just verify that these are present.

Now we'll connect Solr to Tomcat.

Step 3:  Copy the apache-solr-4.0.0.war (which you will find in apache-solr-4.0.0\dist)  into your Tomcat web-apps directory.

Step 4: Finally, the last step. Tell Solr where the existing search configuration files for collection1 reside.
To do this, add the location of the Solr home directory to your Tomcat JAVA_OPTS. I am setting this value in the setenv.bat file, and the declaration looks like this:

set JAVA_OPTS=%JAVA_OPTS% -Dfile.encoding=UTF-8 -server -Xms1536m -Xmx1536m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC -Dsolr.solr.home=C:/solr_home

Step 5:  Start Tomcat and browse to your loaded Solr instance at http://localhost:8080/apache-solr-4.0.0/#/ You should see the Solr Admin screen as below :
Solr Admin Console for Collection 1
To check if Solr has been configured and set up correctly, select collection1 in the left hand side bar and select the query option. Enter 'solr' in the textbox labelled 'q' as shown in the screenshot below.

Default Search Interface
On submitting the query, one result is returned by Solr. On clicking on the result link, the record is displayed as an XML.
Search Result for 'Solr'


In the next post, I'll document the steps of integrating Solr with an existing web-application which is running on the same Tomcat instance with a MySQL back-end.

Tuesday 2 April 2013

Creating an executable jar with a Maven plugin

While there are several non-Maven ways of creating an executable jar that includes all required dependencies, Maven as a build tool offers three plugins for achieving this result directly. The Maven Assembly plugin is the first of these options. Creating an executable jar is just one of the objectives of the Assembly plugin; it can also be used to create distributions in various file formats, such as zip.
To configure the Maven Assembly plugin to create an executable jar with all required dependencies included, add the following configuration definition to your pom.xml in the build section.



..........
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <appendAssemblyId>false</appendAssemblyId>
    <finalName>standalone-${artifactId}-${version}</finalName>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.data.john.MainClass</mainClass>
        <addClasspath>true</addClasspath>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
..........


The second option is the Maven Shade plugin. The primary objective of this plugin is to create an executable jar and so it is much more suited for this use case. The configuration setup is as follows:



.................
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <finalName>standalone-${artifactId}-${version}</finalName>
        <filters>
          <filter>
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.data.masker.controller.MaskerApp</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
..............




While tossing up between the Assembly and the Shade plugins, it should be mentioned that with the Maven Assembly plugin you run the risk of files with the same name overwriting each other. These scenarios are better handled by the Shade plugin, which provides more granular control over the build process by merging files with the same name instead of overwriting them.
Finally, this discussion would not be complete without mentioning the Maven Jar plugin, yet another way of creating an executable jar, with some use case examples given here.
All three options can be invoked using the mvn package call. 

Friday 29 March 2013

Generating JavaDoc with Eclipse or Maven

Generating JavaDoc is an integral part of every Java project, and there are various options for generating the required documentation.

Option 1 :  Using Eclipse and JDK's JavaDoc tool.

This is the simplest option. If you are using Eclipse and have the path to the JDK (not the JRE) set correctly in the Window > Preferences > Java > Installed JREs option (as shown in the screenshot below), Eclipse should be able to find the JavaDoc tool.
Set the path to your installed JDK

JavaDoc Tool in Eclipse
That's it. Clicking on the Project > Generate Javadoc option will bring up a wizard that prompts you for the destination folder for the generated JavaDoc. Apart from generating the JavaDoc, the tool will also create a stylesheet.css for the documentation that can be edited if required.

Option 2 :  Using Maven and the JavaDoc plugin.

Add the plugin definition to your pom.xml
               
                
...........
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.9</version>
  <configuration>
    <stylesheetfile>C:/javadoc/stylesheet.css</stylesheetfile>
    <show>public</show>
  </configuration>
</plugin>
                  
                  
                
                  


In the snippet above, I have added the plugin definition and configuration to the build section. It can also be repeated in the <reporting> section. This allows me to run the JavaDoc generation goal during the build cycle. I have also specified a path to a stylesheet file that defines the look and feel of the generated JavaDoc.

To generate the JavaDoc using Maven, use mvn javadoc:javadoc

More details on this option are available here.




Thursday 28 March 2013

Guava : Using Java Collections the Google Way

Recently Guava came out with release 14.0.1, which includes a number of bug fixes and enhancements. As a utility library, Guava provides an impressive set of extensions and tools for working with Java collections, with an aim of improving code readability, transparently managing collections and introducing useful checks and frequently used utility methods that make it easier to write quality code.
With respect to Java collections, Guava provides utility methods for creating immutable collections and also provides some new collection types. If you are just after various checks and shortcuts for your Java collections, there are a number of collection utilities that let you create an equivalent Java collection while writing code that is cleaner and easier to read.

For example, constructing a generic collection before Java 7 meant repeating the type parameters on both sides of the assignment:
  • List<String> list = new ArrayList<String>();

In Guava, this can be simplified to List<String> list = Lists.newArrayList();
And this approach can be extended to size and initialise the list as follows:
  • List<String> list = Lists.newArrayListWithCapacity(100); // create a list with an initial capacity of 100
  • List<String> list = Lists.newArrayList("red", "blue", "green"); // create and initialise a list with some values
Similarly, Guava has many utility methods for dealing with nulls in collections, managing Strings and performing math operations that are not available in the JDK. If you are already using the Apache Commons library, Guava is definitely worth checking out!
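A few of these utilities in action (a small sketch; the class and variable names are made up):

import java.util.List;

import com.google.common.base.Joiner;
import com.google.common.base.Optional;
import com.google.common.collect.Lists;
import com.google.common.primitives.Ints;

public class GuavaDemo {
    public static void main(String[] args) {
        // Collection creation without repeating type parameters
        List<String> colours = Lists.newArrayList("red", null, "green");

        // Null-tolerant joining: skipNulls() quietly drops the null entry
        System.out.println(Joiner.on(", ").skipNulls().join(colours)); // red, green

        // Optional makes "might be absent" explicit instead of returning null
        String maybeFavourite = null;
        Optional<String> favourite = Optional.fromNullable(maybeFavourite);
        System.out.println(favourite.or("no favourite yet"));

        // Math helpers for primitives that the JDK does not provide directly
        System.out.println(Ints.max(3, 7, 1)); // 7
    }
}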

Monday 11 March 2013

ZK 6.0 and the MVVM pattern

Previously I wrote about my experience in using ZK, an event-driven UI framework that simplifies UI development for J2EE applications. Currently, the latest release of ZK is 6.5.1.2, which came out in December 2012. The transition from version 5 to 6 was significant, as it included a number of new features: an upgrade of the bundled jQuery library from 1.4 to 1.6, the use of advanced templates, new controls and, most notably, the ability to use a new data binding approach called ZK Bind, which is based on the MVVM pattern.
While the MVVM pattern is not a new concept, its introduction in ZK is certainly timely and serves to simplify the development and maintainability of user interface code. Since ZK is an event-driven framework, it has components which generate events, and these in turn need to be handled by a controller, which in turn leads to interactions with the model. In ZK 5, the controller needed to know about the different components that were generating events. In ZK 6, this layer of interaction has been made transparent via an implementation of the MVVM pattern, which uses annotations in the ZUL page and the ViewModel class to tie the components, their events and the model objects together.
This example is a very simple introduction to the basic approach of using MVVM in ZK. As is evident in the ViewModel, HelloViewModel.java, there is no need for the class to know about the different components it interacts with, as was the case in ZK 5. The annotations in the ZUL page and the ViewModel class wire the components, the events generated and the changes in the model together without any component-handling code having to be written.
While applying the annotations and the new approach to building UIs might take some getting used to, there is no doubt that the implementation of the MVVM pattern using the binder (ZK Bind) simplifies UI development and code maintenance. It also provides a cleaner separation of the user interface from the business objects.
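To give a flavour of the approach, here is a minimal ViewModel sketch (the class and property names are made up, and it assumes ZK 6's org.zkoss.bind annotations); a ZUL page would bind to it with viewModel="@id('vm') @init('com.example.HelloViewModel')" and reference @load(vm.message) and @command('sayHello'):

package com.example;

import org.zkoss.bind.annotation.Command;
import org.zkoss.bind.annotation.NotifyChange;

public class HelloViewModel {

    private String message = "";

    public String getMessage() {
        return message;
    }

    // Invoked when a component fires @command('sayHello');
    // @NotifyChange tells the binder to refresh components bound to 'message'.
    @Command
    @NotifyChange("message")
    public void sayHello() {
        message = "Hello MVVM";
    }
}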

Saturday 2 March 2013

Being Agile : Should I be in a Scrum or work on a Kanban?

After having worked with Scrum on several projects, I recently got the opportunity to experience the Kanban way of being Agile. While there is plenty in common between the two flavours of Agile, their overall focus is the biggest differentiator: in Scrum the focus is on the quantity of work delivered in a sprint, while in Kanban the focus is more on quality. Kanban limits the work in progress with a goal of reducing multi-tasking and maximising output. When working in a Kanban system, the term Flow of Value describes how the current work can give maximum returns, and the output of Kanban teams is measured in terms of an established cadence that reflects the team's ability to deliver high quality work consistently.

The following table gives some comparison points between the Scrum and the Kanban philosophies.

Scrum | Kanban
Focus is on the quantity of work delivered in a sprint. | Focus is on the quality of work delivered in a phase or cycle.
Work is planned and estimated for fixed-length iterations (sprints). | There are no iterations, and the need for estimation is low.
New stories can be started while others are in progress. | Emphasis is on completing in-progress stories, with priority given to fixing bugs in existing stories.
Team members have clearly defined roles, such as Scrum Master and Product Owner. | No such roles are mandated.
Meetings such as pre-planning meetings and retrospectives have a significant bearing on the sprint. | Meetings are called as and when required.

After 6 months of employing the Kanban approach, some immediate gains were visible. Since there were no sprint failures, the morale of the team got a boost, which in turn led to greater productivity. Since the WIP limits were strictly adhered to, there was some buffer time, which gave us the opportunity to focus on creating more unit tests, upgrading to recent versions of developer tools or helping reduce bottlenecks elsewhere on the existing Kanban board. While the idea of downing tools and not starting anything new might sound incredulous to a lot of people, slide 9 in this presentation comparing Kanban to Scrum gives a good guide to managing idle team members. Obviously the Kanban approach requires more discipline and trust among team members, but if applied in the correct way it can help achieve some immediate goals.

Sunday 10 February 2013

Testing Private methods using Reflection

Without entering into a debate of why or why not private methods should be directly tested, here are the various steps involved in testing a private method using Java Reflection.

Scenario 1 : The private method has no input parameters, operates on a private variable and returns a value.

The class to be tested.

package com.john.exeriment;

/**
 * The class to be tested.
 *
 */
public class App
{
    private String name = "John";
    private int numCharactersInName = 0;

    public int getNumCharactersInName() {
        return numCharactersInName;
    }

    private int getSizeOfName()
    {
        numCharactersInName = name.length();
        return numCharactersInName;
    }

    public static void main( String[] args )
    {
        App app = new App();

        System.out.println(" The size of the name is " + app.getSizeOfName());
    }
}
-------------
JUNIT Class

package com.john.exeriment;

import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

/**
 * Unit test for simple App.
 */
public class AppTest extends TestCase
{
    /**
     * Create the test case
     *
     * @param testName name of the test case
     */
    public AppTest( String testName )
    {
        super( testName );
    }

    /**
     * @return the suite of tests being tested
     */
    public static Test suite()
    {
        return new TestSuite( AppTest.class );
    }

/**
* Test method operates on the private field using its default value
**/
public void testAppMethod() throws SecurityException, NoSuchMethodException, IllegalArgumentException, IllegalAccessException, InvocationTargetException, NoSuchFieldException
    {
    App myApp = new App();
        Method m = myApp.getClass().getDeclaredMethod("getSizeOfName", null);
        m.setAccessible(true);
        Integer size = (Integer) m.invoke(myApp, null);

    assertTrue( "Size should be ", size == myApp.getNumCharactersInName());
    }

We assume the method getNumCharactersInName() has been tested.


/**
* Test case modifies the private field of the class using Reflections before testing the required private method.
**/
public void testAppField() throws SecurityException, NoSuchFieldException, IllegalArgumentException, IllegalAccessException, NoSuchMethodException, InvocationTargetException
    {
    App anotherApp = new App();
    Field field = anotherApp.getClass().getDeclaredField("name");
    field.setAccessible(true);
    field.set(anotherApp, "Testing!!!");
   
    Method m = anotherApp.getClass().getDeclaredMethod("getSizeOfName", null);
         m.setAccessible(true);
         Integer size = (Integer) m.invoke(anotherApp, null);
   
    assertEquals( "Size should be", 10,  size, 0.000001 );
    }

}



Scenario 2 : The private method takes a String as an input parameter and does not return a value.

We add the following method to our class.


private void getSizeOfName(String name)
{
numCharactersInName = name.length();
}


And to test it, we have the following JUNIT test.

public void testAppMethodWithInputParam() throws SecurityException, NoSuchMethodException, IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
    App myApp = new App();
   
    // The parameter type
    Class[] parameterTypes = new Class[1];
        parameterTypes[0] = String.class;
        
        // The parameter value
        Object[] parameters = new Object[1];
        parameters[0] = "Cricket";
        
        Method m = myApp.getClass().getDeclaredMethod("getSizeOfName", parameterTypes);
        m.setAccessible(true);
        m.invoke(myApp, parameters);

    assertEquals( "Size should be 7", 7, myApp.getNumCharactersInName());
  }


In the test, we set up a String parameter which we initialise to the value Cricket, and check that the method sets the numCharactersInName field correctly to 7 characters.

Thus, testing private methods via Reflection is a straightforward process and should be used if there are key parts of the business logic embedded in private methods.

Monday 21 January 2013

Parsing Strings in PL/SQL


Consider a scenario where you need to parse a VARCHAR2 value that is delimited by a special character such as a comma. Rather than parsing the entire input and searching for the start and end of each sub-string, use apex_util.string_to_table(), which is available in Oracle 10g and 11g.

A typical usage example would be as follows:

DECLARE

variable_array_to_hold_values apex_application_global.vc_arr2;
variable_test VARCHAR2(100) := 'show:me:a:parser';

 BEGIN

    variable_array_to_hold_values := apex_util.string_to_table(variable_test, ':');
     FOR i IN 1..variable_array_to_hold_values.COUNT
     LOOP
         DBMS_OUTPUT.PUT_LINE(' Value is ' || variable_array_to_hold_values(i));
     END LOOP;
 END;
 /

And the result would be
 Value is show
 Value is me
 Value is a
 Value is parser

Thursday 17 January 2013

Apache Commons Lang Library : Managing Strings without NullPointers

The Apache Commons Lang 3.0 package is a useful set of utilities and interface definitions for String and Character manipulation. With respect to String manipulation, the StringUtils, StringEscapeUtils, RandomStringUtils, Tokenizer and WordUtils classes provide a number of utility methods for manipulating and managing Strings.

The StringUtils class has some interesting functions, such as:
  • IsEmpty/IsBlank - checks whether a String contains text. The main difference between the two is that IsBlank also treats whitespace-only Strings as blank, while IsEmpty does not.
  • Trim/Strip - removes leading and trailing whitespace
  • Equals - compares two strings null-safe
  • startsWith - check if a String starts with a prefix null-safe
  • endsWith - check if a String ends with a suffix null-safe
  • IndexOf/LastIndexOf/Contains - null-safe index-of checks
  • IndexOfAny/LastIndexOfAny/IndexOfAnyBut/LastIndexOfAnyBut - index-of any of a set of Strings
  • ContainsOnly/ContainsNone/ContainsAny - does String contains only/none/any of these characters
  • Substring/Left/Right/Mid - null-safe substring extractions
  • SubstringBefore/SubstringAfter/SubstringBetween - substring extraction relative to other strings
  • Split/Join - splits a String into an array of substrings and vice versa
While these functions may appear similar to the ones provided by the String class itself, the StringUtils versions are null-safe, eliminating the possibility of those horrible NPEs; a null input will simply return null without any exception being thrown. A quick illustration follows.
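A small sketch of the null-safe behaviour described above (the class name is made up; the calls are standard org.apache.commons.lang3.StringUtils methods):

import org.apache.commons.lang3.StringUtils;

public class StringUtilsDemo {
    public static void main(String[] args) {
        String value = null;

        // No NullPointerExceptions, even though 'value' is null.
        System.out.println(StringUtils.isBlank(value));          // true
        System.out.println(StringUtils.isBlank("   "));          // true (whitespace only)
        System.out.println(StringUtils.isEmpty("   "));          // false (not zero length)
        System.out.println(StringUtils.equals(value, "text"));   // false
        System.out.println(StringUtils.trim(value));             // null, no exception
        System.out.println(StringUtils.substringBetween("a [b] c", "[", "]"));  // b
        System.out.println(StringUtils.join(new String[] {"a", "b", "c"}, "-")); // a-b-c
    }
}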

Saturday 12 January 2013

The Kogan Agora 10 inch Tablet : A user review

I recently purchased a two-month-old Kogan Agora tablet powered by a 1 GHz single-core ARM Cortex A8, running Android 4.0.4 ICS. It has 16 GB of storage expandable to 32 GB, a sleek 10 inch screen with a 1024x768 LCD display, weighs 600 g and is about 12 mm thick. Priced at less than $200 (I got it for less than $150 off eBay), it is an excellent entry-level device into the world of tablet computing.
My requirements were modest at best. I needed a tablet that could play some kiddie games for my 2 year old, browse the net and handle some rough treatment at the hands of said 2 year old. Suffice it to say, the tablet so far meets all my requirements.
On the plus side, it has a sleek look, is light and easy to hold and use. The pinch and zoom feature that has been maligned in several reviews, worked well for me. Simple games like Talking Tom work well and give an indication of how well the microphone and the speakers work. 
On the negative side, the twin cameras produce disappointing results compared to the photos taken by the Samsung tablets, and multi-tasking is definitely not one of the tablet's strong features. Even loading two sites at the same time can make the device slow and unresponsive, but with the latest models offering a dual-core processor this problem could well be a thing of the past.
Even with its relatively slow performance, if I go back and evaluate it against my original requirement of acquiring a fully functional entry-level tablet, it is a great buy, and yes, I won't feel bad at all if my 2 year old accidentally breaks it.
Finally, a handy tip: get yourself a leather case and a screen protector if you intend to share the device with your kids, and if you are still unsure, have a read of this review and perhaps this one (for more technical comparisons) from June 2012, which benchmarks the tablet running Android 4.0.3 ICS (the version prior to the one I have installed).