Sanjay Sharma’s Weblog

July 11, 2010

Hive BI analytics: Visual Reporting

Filed under: Hadoop, Hive, HPC, Java world — indoos @ 5:23 pm

I had earlier written about using Hive as a data source for industry-proven BI reporting tools, and here is a list of the various official announcements from Pentaho, Talend, MicroStrategy and Intellicus –

The topic is close to my heart since I firmly believe that while Hadoop and Hive are true large-data analytics tools, their power is currently limited to use by software programmers. The advent of BI tools in the Hadoop/Hive world would certainly bring it closer to the real end users – business users.

I am currently not too sure how these BI reporting tools are deciding how much of the analytics should be left in Map Reduce and how much in the reporting tool itself – I guess it will take time to find the right balance. Chances are that I will find out a bit earlier than others, as I am working closely (read here) with the Intellicus team to get the changes in the Hive JDBC driver needed for Intellicus' interoperability with Hive.
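For anyone who wants to experiment with Hive as a reporting source before these integrations mature, the plain Hive JDBC driver is already enough to pull aggregated results into any JDBC-aware tool. Below is a minimal sketch, assuming a Hive server is listening on localhost:10000, the 0.x-era driver class org.apache.hadoop.hive.jdbc.HiveDriver is on the classpath, and a table named page_views exists – the host, port, table and query are illustrative only.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveReportQuery {
    public static void main(String[] args) throws Exception {
        // register the Hive JDBC driver shipped with the Hive distribution
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        // the heavy lifting (the GROUP BY) runs as Map Reduce inside Hadoop;
        // only the small aggregated result set travels back to the reporting tool
        ResultSet rs = stmt.executeQuery(
                "SELECT country, COUNT(1) FROM page_views GROUP BY country");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getString(2));
        }
        con.close();
    }
}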


July 2, 2010

kundera- making life easy for Apache Cassandra users

Filed under: Cassandra, HPC, Java world, NoSQL — indoos @ 4:54 am

One of my colleagues, Animesh, has been working on an annotation-based wrapper over Cassandra, and we have finally decided to open source it so that it can be nurtured as part of the bigger community.

kundera is hosted on Google Code and can be reached here – http://code.google.com/p/kundera/

Here is how to get started with kundera in 5 minutes – http://anismiles.wordpress.com/2010/06/30/kundera-knight-in-the-shining-armor/

The logic behind kundera is quite simple – provide an ORM-like wrapper over the difficult-to-use Thrift APIs. Eventually, all NoSQL databases should offer similar APIs so that they become easy to use.

The initial release includes a JPA-like annotation library. The roadmap is to subsequently turn it into a Cassandra-specific JPA extension. The other important feature to be added is index/search support using Lucandra/Solandra.
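To give a flavour of what an annotation-based wrapper buys you over raw Thrift, here is an illustrative sketch in the JPA style kundera is aiming for. I have used the plain javax.persistence annotations as stand-ins, and the entity, column family and field names are made up for the example – see the getting-started link above for kundera's actual API.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// The idea: "users" maps to a Cassandra column family and each field to a column,
// so application code never has to touch the Thrift API directly.
@Entity
@Table(name = "users")
public class User {

    @Id
    private String userId;        // the row key in the column family

    @Column(name = "first_name")
    private String firstName;     // an individual Cassandra column

    @Column(name = "city")
    private String city;

    // getters and setters omitted for brevity
}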

August 27, 2009

Hadoop- some revelations

Filed under: Advanced computing, Java world, Tech — indoos @ 5:44 am

My recent experience with using Hadoop in production-grade applications was both good and bad.

Here are some of the bad ones to start with-

  • Using commodity servers – not entirely true, as even acknowledged somewhere on the Hadoop web site. Anything below 8 GB RAM may not help for any serious production application, particularly if each Map/Reduce task uses 1-2 GB of RAM
    • The task tracker and data node JVM instances take at least around 1 GB RAM each – effectively leaving 5-6 GB RAM for the Map/Reduce JVMs
    • At 512 MB for each Map and Reduce JVM, that leaves room for roughly 5-8 map + 3-6 reduce instances per node
  • Real-time applications usually need lookup or metadata data. Although Hadoop does offer the Distributed Cache and Configuration-based (pseudo) replication of small shared data, the heavy Java in-memory object handling (serialization/deserialization) and HDFS access do not allow performant lookup handling (see the sketch after this list)
  • I would love to see more/easier/default control over the various settings/parameters in the config files, as the current mechanism is really a pain in the back
  • Hadoop uses a lot of temp space. It is easy to NOT notice that you may only get to use about 1/4 of your total available disk space for business data: with the default (and sensible) replication factor of 3, two extra copies of every block are stored, and roughly one more part goes to temporary (working/intermediate) data. So to process, say, 1 TB of data you may require around 4 TB+ of disk. I learned this the hard way after wasting good precious time!!
  • Last but not the least – it is really easy to write Map Reduce jobs using Hadoop's genius framework, but really difficult to convert business logic into the Map Reduce paradigm
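As an example of the lookup-data point above, here is a minimal sketch of shipping a small lookup file to every mapper via the Distributed Cache, using the old (0.20-era) mapred API. The file path /shared/country-codes.txt, the tab-separated format and the LookupMapper class are assumptions for illustration; and as noted above, you still pay the deserialization and HDFS cost of getting the file out to every node.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class LookupMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    // On the driver side the file is registered once per job:
    //   DistributedCache.addCacheFile(new java.net.URI("/shared/country-codes.txt"), jobConf);

    private final Map<String, String> countryByCode = new HashMap<String, String>();

    public void configure(JobConf conf) {
        try {
            // every task gets a local copy of the cached file(s)
            Path[] cached = DistributedCache.getLocalCacheFiles(conf);
            BufferedReader reader = new BufferedReader(new FileReader(cached[0].toString()));
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\t");        // e.g. "IN<tab>India"
                countryByCode.put(parts[0], parts[1]);
            }
            reader.close();
        } catch (IOException e) {
            throw new RuntimeException("could not load lookup file from distributed cache", e);
        }
    }

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        // first field of each input record is a country code; translate it via the lookup map
        String[] fields = value.toString().split("\t");
        String country = countryByCode.get(fields[0]);
        output.collect(new Text(country != null ? country : "UNKNOWN"), value);
    }
}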

To be continued ……………….

May 29, 2009

Setting up a Hadoop 0.20.0 cluster with a Windows slave

Filed under: Advanced computing, Java world, Tech — indoos @ 8:46 am

Here are the steps for setting up a Hadoop cluster and plugging in a Windows machine as a slave –

a. First set up a pseudo-distributed Hadoop on a Linux machine as explained in http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)

I was able to use this excellent tutorial with minor changes to get a pseudo-distributed Hadoop setup running on CentOS/Ubuntu and on a Windows machine.

I used a common user, hadoop, created on all machines.

b. The next step was to get all the pseudo-distributed machines working together as a real cluster. Again, http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster) was an easy reckoner to get it working.

Some easy tips to get Hadoop working in cluster mode are –

  1. Use machine names everywhere instead of IP addresses and change /etc/hosts on all machines
  2. Configure the setup on the master machine, i.e. the conf xml files (including the masters and slaves files) as well as /etc/hosts, and copy all these conf files and /etc/hosts entries to the slave nodes – the key entries are summarised after this list
  3. The same copying trick helps for the authorized_keys file: enter all public keys from each slave into the master machine's authorized_keys and then copy this authorized_keys file to all slaves
  4. Set JAVA_HOME in each installation's hadoop-config.sh file. I had some issues with setting it in .profile and still getting JAVA_HOME problems
  5. Another easy option is to create a gzip of your master hadoop install and copy it over to set up the slave nodes
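For reference, these are the cluster-defining entries I mean, roughly as in the Michael Noll tutorials – a sketch only, assuming the master machine is named master and Hadoop 0.20's split of the old hadoop-site.xml into core-site.xml, mapred-site.xml and hdfs-site.xml; hostnames, ports and the replication factor will vary with your setup:

conf/core-site.xml    ->  fs.default.name    = hdfs://master:54310
conf/mapred-site.xml  ->  mapred.job.tracker = master:54311
conf/hdfs-site.xml    ->  dfs.replication    = 3
conf/masters          ->  master
conf/slaves           ->  the hostnames of all slave nodes (and optionally the master), one per line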

c. So now for the Windows bit –

  1. Install Cygwin if you have not already done so
  2. Check if you have the sshd server installed in your Cygwin setup – if not, install it
  3. Double-check that you have a service called CYGWIN sshd running under Windows services
  4. Create a hadoop user by –
cygwin> net user hadoop password /add /yes
cygwin> mkpasswd -l -u hadoop >> /etc/passwd
cygwin> chown hadoop -R /home/hadoop

d. Treat the Windows machine as *nix –

  1. Now use PuTTY to log in to your local Windows machine using the newly created hadoop user
  2. Set up Hadoop as you would for any Linux machine – the easy option is to copy over the master hadoop installation
  3. Do not forget to set up the .ssh files: copy the public key into the master's authorized_keys file and copy that authorized_keys file back to this Windows machine. Also add JAVA_HOME to the hadoop-config.sh file, which on Windows should be a /cygdrive/<path to java6> entry

e. Assuming that the master server is already running, start this slave using "bin/hadoop-daemon.sh start datanode" or "bin/hadoop-daemon.sh start tasktracker" to run the data node or task tracker instance respectively.

Next, I will write about how I managed to get the Hive 0.3.0 release working with Hadoop 0.20.0 on my small Hadoop cluster of 3 Linux machines and 1 Windows machine.

May 28, 2009

A must read for opponents of Code Quality and TDD

Filed under: Code quality, Java world — indoos @ 8:21 am

All test-driven development (TDD) and pair programming (PP) opponents – here is something really straightforward and easy to understand –

http://anarchycreek.com/2009/05/26/how-tdd-and-pairing-increase-production/

April 9, 2009

GAE+Groovlets – local+remote with Eclipse plugin

Filed under: Java world — indoos @ 8:19 am

After trying GAE for Java using the core GAE SDK, I went ahead and tried Grails+GAE – sorry, it doesn't work yet.

Groovy+GAE, however, does work as explained in this little tutorial – but only the production environment works while local development doesn't 😉

Local deployment fails due to a groovy.security.GroovyCodeSourcePermission "/groovy/shell" problem.

I then started trying the Google Plugin for Eclipse and got Groovlets+GAE working in the local as well as the remote environment.

Here are the steps-

GAE+Groovy+Eclipse {screenshot}

  • Changed the build.groovy file to use the war folder instead of the deploy folder {webinf = "war/WEB-INF" instead of webinf = "deploy/WEB-INF"}
  • Changed /.settings/com.google.appengine.eclipse.core.prefs to include groovy-all-1.6.1.jar in filesCopiedToWebInfLib

#Thu Apr 09 10:24:45 IST 2009
eclipse.preferences.version=1
filesCopiedToWebInfLib=appengine-api-1.0-sdk-1.2.0.jar|datanucleus-appengine-1.0.0.final.jar|datanucleus-core-1.1.0.jar|datanucleus-jpa-1.1.0.jar|geronimo-jpa_3.0_spec-1.1.1.jar|geronimo-jta_1.1_spec-1.1.1.jar|jdo2-api-2.3-SNAPSHOT.jar|groovy-all-1.6.1.jar|

  • The project can now be run locally using Run As >> Web Application without any groovy permission issues
  • The project can be deployed to remote GAE using the cute little Deploy button provided by the Google Eclipse Plugin {the button below the Eclipse menu bar -> Project menu in the above image}
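For completeness, the bit that makes Groovlets work at all (locally or on GAE) is the stock GroovyServlet mapping in war/WEB-INF/web.xml. The snippet below is the standard Groovy servlet registration rather than anything from this particular project, and the *.groovy URL pattern is just the usual choice:

<servlet>
  <servlet-name>GroovyServlet</servlet-name>
  <servlet-class>groovy.servlet.GroovyServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>GroovyServlet</servlet-name>
  <url-pattern>*.groovy</url-pattern>
</servlet-mapping>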

December 11, 2008

Love at first bite- GROOVY

Filed under: Java world — indoos @ 5:44 am

While looking at Ruby on Rails some time back, I was enticed by its mean, clean way of creating fast data-driven web sites. Being a hard-core JAVA-ite, I know the LABOR PAINS of achieving something similar in the Java world of JSF, Struts etc.

The first SIGHT of GROOVY aka GRAILS- I was enticed

The first BITE of GROOVY aka GRAILS- I was in Love!!!!

So now I have a Rails clone powered by Java – a deadly combo!!!!

The first few weeks were truly amazing as I tried my hand at a new project. Fast UI development, magical Ajax support, convention over configuration – MIRCHY (spicy)! This was what I had been wanting for so long.

Some weeks later, as Grails and I settle down together, I am becoming aware of our weaknesses (both mine and Grails/Groovy's). It is not that bad yet, and with Big B Java as the heavenly godfather covering up the setbacks, it has been good so far.

I am not too concerned about Grails/Groovy being slow (not sure, though, whether that is even true). Why? Because Groovy's heart is actually Made in JAVA, and I will know what to pull and where to get it beating faster.

Will keep you posted on whether this LOVE lasts forever.
