Sanjay Sharma’s Weblog

February 13, 2014

Hurray! My book on Cassandra Design Patterns is out!!

My attempt at summarizing my experiences of using Cassandra for real-world problems is out.

Cassandra Design Patterns is a concise collection of real-world Cassandra usage and design patterns.

One of the key decisions while writing the book was whether to use a design pattern template, and I finally decided to use a traditional design pattern definition template along the lines of the famous GoF design patterns. This approach seemed aligned with the definition of software design patterns on Wikipedia – “In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design.”

Hopefully, strict design traditionalists will forgive me for using this template in a new paradigm: covering Cassandra usage, applicability, architecture, data modeling and applied design patterns along with the design patterns themselves.

The book starts by capturing how Cassandra is useful for solving post-RDBMS-era challenges, using the well-known 3V model of the big data world. The patterns are aptly modeled on the 3Vs – velocity, variety and volume – and should be a good read for people from the RDBMS world who want to identify whether they should start looking beyond its boundaries. Interestingly, these patterns are addressed by almost all popular NoSQL solutions, and hence are not limited to Cassandra.

Next, Cassandra’s unique differentiators are modeled into patterns that show how Cassandra can solve some of the most challenging problems in the database world today.

Real-world business problems are seldom solved using a single technology stack, and hence the next chapter covers usage and design patterns for combining Cassandra with popular big data technologies such as Hadoop and Solr/Elasticsearch.

The subsequent chapters continue the journey with known patterns and anti-patterns for solving real-world problems, including some additional interesting patterns based on new and upcoming features in Cassandra.

Hopefully this concise book will serve data architects, solution architects, and Cassandra developers and experts alike as a helpful tool and guide for using the power of Cassandra in the right way.

I do promise to listen keenly to community reviews and suggestions, and to keep improving and enhancing this list of Cassandra use case and design patterns.

August 5, 2013

Global Big Data Conference, Hyderabad – 2 Aug 2013 – Finance/Manufacturing Use Cases

Filed under: Uncategorized — indoos @ 6:30 pm

June 19, 2013

NYC Cassandra – March 2013 – Lightning talk

Filed under: Uncategorized — indoos @ 9:06 pm

November 1, 2012

Big Data Technologies Landscape

Filed under: Cassandra, Cloud, Hadoop, Hive, NoSQL — indoos @ 2:24 pm

July 30, 2011

The Cloud Is the Way to Go | Cloud Computing Journal

Filed under: Cloud, Hadoop — indoos @ 8:25 am


August 16, 2010

Hadoop Ecosystem World-Map

While preparing the keynote for the recently held HUG India meetup on 31st July, I decided that I would try to keep my session short, but useful and relevant to the lined-up sessions on hiho, JAQL and Visual Hive. I have always been a keen student of geography (and still take pride in it!), and thought it would be great to draw a visual geographical map of the Hadoop ecosystem. Here is what I came up with, along with a nice little story behind it:

  1. How did it all start? Huge data on the web!
  2. Nutch built to crawl this web data
  3. Huge data had to be saved – HDFS was born!
  4. How to use this data?
  5. Map-Reduce framework built for coding and running analytics – Java, or any language via streaming/pipes
  6. How to get in unstructured data – web logs, click streams, Apache logs, server logs – fuse, webdav, chukwa, flume, Scribe
  7. Hiho and sqoop for loading data into HDFS – RDBMS can join the Hadoop bandwagon!
  8. High-level interfaces required over low-level Map-Reduce programming – Pig, Hive, Jaql
  9. BI tools with advanced UI reporting – drilldown etc. – Intellicus
  10. Workflow tools over Map-Reduce processes and high-level languages
  11. Monitor and manage Hadoop, run jobs/Hive, view HDFS – high-level view – Hue, Karmasphere, Eclipse plugin, Cacti, Ganglia
  12. Support frameworks – Avro (serialization), ZooKeeper (coordination)
  13. More high-level interfaces/uses – Mahout, Elastic MapReduce
  14. OLTP also possible – HBase
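The streaming model mentioned in step 5 boils down to a mapper and a reducer that just read and write lines. Here is a minimal, self-contained Python sketch of the two phases – the in-memory driver stands in for the framework, and the function names are illustrative, not Hadoop APIs:

```python
import itertools
from operator import itemgetter

def mapper(lines):
    # Map phase: emit a (word, 1) pair for every word, as a streaming mapper would
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    # Shuffle/sort, then reduce: sum the counts for each distinct word
    for word, group in itertools.groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

if __name__ == "__main__":
    lines = ["big data on the web", "the web is big"]
    for word, count in reducer(mapper(lines)):
        print(word, count)
```

In real Hadoop streaming the same two functions would live in separate scripts reading stdin and writing tab-separated stdout, with the framework doing the sort between them.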

Would love to hear feedback about this and how to grow it further to add the missing parts!

Hadoop ecosystem map

July 26, 2010

Next Hadoop India User Group Meetup – July 2010

I am pretty excited and looking forward to attending the next HUG meetup on 31st July 2010 in Noida. I really hope to see energetic Indian Hadoop-ers discuss what’s happening in the Indian Hadoop community as well as the rest of the world.

I guess I may have been the culprit behind the delay; otherwise we would have had the event at least 2-3 months earlier. I will now try to hold similar events more frequently, and already have some thoughts around planning one on NoSQL databases – again one of my favourites as a technology of the future. Unlike last time in Nov 2009, a group of young Impros – the Absolute Zero forum – is organizing the event and sparing me lots of pain :). Of course, nothing would have been possible without the support of iLabs and Impetus, pushing us to participate in the open source community as much as possible.

The HUG event this time will have some interesting sessions. Sonal Goyal will be talking about ‘hiho’ – an open source solution for bridging the gap between the RDBMS world and Hadoop. As I foresee it, all software-based businesses, including SMEs, would like to ride the bandwagon of using BI and consumer analytics to enhance business, and Hadoop is going to enable that in a cost-effective way. RDBMSs will continue to be used for real-time applications, since they are time-tested and essentially do not face serious competition (not yet!) from the new-age NoSQL databases. So the demand for tools that bring RDBMS data into Hadoop analytics systems is going to be hot! ‘hiho’ and sqoop are the two top contenders in this category. Hopefully Sonal will share with us the power of hiho as well as its pros and cons versus sqoop.
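As a rough illustration of what such an import tool does under the hood – read a table over a database connection and lay it down as delimited text for HDFS – here is a hedged Python sketch, with sqlite3 standing in for the RDBMS and a local file in place of HDFS (the table and column names are invented for the example):

```python
import sqlite3

def export_table(conn, table, out_path, sep="\t"):
    # Dump every row of the table as delimited text: the kind of flat
    # file an RDBMS-to-HDFS import tool hands to downstream Hadoop jobs
    with open(out_path, "w") as out:
        for row in conn.execute(f"SELECT * FROM {table}"):
            out.write(sep.join(str(col) for col in row) + "\n")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
    export_table(conn, "orders", "orders.tsv")
```

The real tools add the hard parts this sketch skips: splitting the table across parallel map tasks, incremental imports, and type mapping.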

The JAQL talk from Himanshu of IBM will again be interesting – it shows that people are trying out approaches other than Map-Reduce Java/streaming coding and the traditional Pig and Hive high-level interfaces. The challenge for Himanshu will be to help us understand how JAQL is better than Hive or Pig.

Sajal will be talking about Hive + Intellicus – a window into the unstoppable future of Hadoop in DW and BI.

I have always been biased towards Hive, as SQL and Java usually go hand in hand in almost all business applications. So it will be interesting to see how Hadoop, through Hive, is slowly becoming ready for enterprise applications and providing a visual interface for data analytics. It seems that, at last, Hadoop is ready to come out of the developer-only world and enter the domain of business user$.

July 11, 2010

Hive BI analytics: Visual Reporting

Filed under: Hadoop, Hive, HPC, Java world — indoos @ 5:23 pm

I had earlier written about using Hive as a data source for industry-proven BI reporting tools, and here is a list of the official announcements from Pentaho, Talend, MicroStrategy and Intellicus –

The topic is close to my heart, since I firmly believe that while Hadoop and Hive are true large-data analytics tools, their power is currently limited to software programmers. The advent of BI tools in the Hadoop/Hive world will certainly bring it closer to the real end users – business users.

I am currently not too sure how these BI reporting tools decide how much of the analytics is left in Map-Reduce and how much stays in the reporting tool itself – I guess it will take time to find the right balance. Chances are that I will find it a bit earlier than others, as I am working closely (read here) with the Intellicus team on the changes to the Hive JDBC driver for Intellicus’ interoperability with Hive.
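That balance can be shown in miniature: the same aggregate computed engine-side (a GROUP BY pushed down to Hive/Map-Reduce) versus tool-side (the report fetches raw rows and totals them itself). Here is a small Python sketch with sqlite3 standing in for the engine – the clicks table is invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (page TEXT, hits INTEGER)")
conn.executemany("INSERT INTO clicks VALUES (?, ?)",
                 [("home", 3), ("home", 5), ("about", 2)])

# Pushed down: the engine does the aggregation and returns only the summary
engine_side = dict(conn.execute(
    "SELECT page, SUM(hits) FROM clicks GROUP BY page"))

# Pulled up: the reporting tool fetches every raw row and aggregates itself
tool_side = {}
for page, hits in conn.execute("SELECT page, hits FROM clicks"):
    tool_side[page] = tool_side.get(page, 0) + hits

# Same answer either way; the difference is how much data crosses the wire
assert engine_side == tool_side
```

With Hive-scale data the pushed-down version ships a handful of summary rows; the pulled-up version ships the whole table to the reporting tool, which is why where the analytics lives matters.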

June 24, 2010

Webinar details – Large data and compute HPC offerings in Impetus

Filed under: HPC — indoos @ 2:28 pm

Hive remote debugging

Filed under: Hadoop, Hive — indoos @ 2:40 am

Recently I spent some time looking under the Hive hood while working with my colleague Sunil on HIVE-1346 in the Hive JDBC implementation.

It turns out it is not very easy to debug the code, so here is a useful script we used to enable remote debugging in Hive. We used Eclipse remote debugging with Hadoop 0.20.1 running in standalone mode with Hive 0.5.0.

I originally had to add extra line breaks to format the script; the version below folds those into proper shell continuations, builds the classpath with a ‘for’ loop over all the lib jars from the Hadoop and Hive lib directories, and exports the CLASSPATH that the java commands actually use.

export HADOOP_HOME=/home/hadoop/hadoop-0.20.1
export HIVE_HOME=/home/hadoop/hive-0.5.0-bin
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HIVE_LIB=$HIVE_HOME/lib
export HADOOP_LIB=$HADOOP_HOME/lib

# Build the classpath from the conf directories plus every jar in the lib directories
CLASSPATH=$HIVE_HOME/conf:$HADOOP_HOME/conf:$JAVA_HOME/lib/tools.jar
for jar in $HIVE_LIB/*.jar $HADOOP_LIB/*.jar; do
  CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH

export DEBUG_INFO="-Xmx1000m -Xdebug -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=8001,server=y,suspend=n"

# Hive server, listening for an Eclipse remote debugger on port 8001
$JAVA_HOME/bin/java $DEBUG_INFO -classpath $CLASSPATH \
  -Dhadoop.log.dir=$HADOOP_HOME/logs -Dhadoop.log.file=hadoop.log \
  -Dhadoop.home.dir=$HADOOP_HOME -Dhadoop.root.logger=INFO,console \
  -Djava.library.path=$HADOOP_LIB/native/Linux-i386-32 \
  -Dhadoop.policy.file=hadoop-policy.xml \
  org.apache.hadoop.util.RunJar $HIVE_LIB/hive-service-0.5.0.jar

# Hive CLI with the same debug settings ($DEBUG_INFO already sets -Xmx1000m)
$JAVA_HOME/bin/java $DEBUG_INFO -classpath $CLASSPATH \
  -Dhadoop.home.dir=$HADOOP_HOME -Dhadoop.root.logger=INFO,console \
  -Djava.library.path=$HADOOP_LIB/native/Linux-i386-32 \
  -Dhadoop.policy.file=hadoop-policy.xml \
  org.apache.hadoop.util.RunJar $HIVE_LIB/hive-cli-0.5.0.jar org.apache.hadoop.hive.cli.CliDriver