August 16, 2010
While preparing the keynote for the recently held HUG India meetup on 31st July, I decided to keep my session short, but useful and relevant to the lined-up sessions on hiho, JAQL and Visual Hive. I have always been a keen student of geography (still take pride in it!) and thought it would be great to draw a visual geographical map of the Hadoop ecosystem. Here is what I came up with, along with the nice little story behind it:
- How did it all start? Huge data on the web!
- Nutch was built to crawl this web data
- Huge data had to be saved – HDFS was born!
- How to use this data?
- MapReduce framework built for coding and running analytics – Java, or any language via streaming/pipes
- How to get unstructured data in – web logs, click streams, Apache logs, server logs – FUSE, WebDAV, Chukwa, Flume, Scribe
- hiho and Sqoop for loading data into HDFS – RDBMS can join the Hadoop bandwagon!
- High-level interfaces required over low-level MapReduce programming – Pig, Hive, JAQL
- BI tools with advanced UI reporting (drilldown etc.) – Intellicus
- Workflow tools over MapReduce processes and high-level languages
- Monitor and manage Hadoop, run jobs/Hive, view HDFS – a high-level view – Hue, Karmasphere, Eclipse plugin, Cacti, Ganglia
- Support frameworks – Avro (serialization), ZooKeeper (coordination)
- More high-level interfaces/uses – Mahout, Elastic MapReduce
- OLTP also possible – HBase
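To make the streaming/pipes point above concrete: a Hadoop Streaming job can be written in any language that reads stdin and writes stdout. Here is a minimal word-count sketch in Python of the mapper and reducer roles (the shuffle/sort that Hadoop performs between them is simulated with a local sort; this is an illustration of the model, not a production job):

```python
from itertools import groupby

def mapper(lines):
    # Mapper role: emit (word, 1) pairs, like writing "word\t1" to stdout.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    # Reducer role: Hadoop delivers mapper output sorted by key;
    # here a local sort stands in for the shuffle/sort phase.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    sample = ["huge data on the web", "huge data had to be saved"]
    print(dict(reducer(mapper(sample))))
```

In a real cluster, the same two roles would run as separate scripts passed to the Hadoop Streaming jar, with HDFS providing the input and output paths.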
Would love to hear feedback about this and how to grow it further to add the missing parts!
July 26, 2010
I am pretty excited and looking forward to attending the next HUG meetup on 31st July 2010 in Noida. I really hope to see energetic Indian Hadoopers discussing what's happening in the Indian Hadoop community as well as the rest of the world.
I guess I may have been the culprit behind the delay; otherwise we would have had the event at least 2-3 months earlier. I will now try to hold similar events more frequently, and already have thoughts around planning one on NoSQL databases – again one of my favourites as a technology of the future. Unlike last time in Nov 2009, a group of young Impros – the Absolute Zero forum – is organizing the event and sparing me a lot of pain :). Of course, none of this would have been possible without the support of iLabs and Impetus, pushing us to participate in the open source community as much as possible.
The HUG event this time will have some interesting sessions. Sonal Goyal will be talking about 'hiho' – an open source solution for bridging the gap between the RDBMS world and Hadoop. As I foresee it, all software-based businesses, including SMEs, would like to ride the bandwagon of using BI and consumer analytics to enhance business, and Hadoop is going to enable that in a cost-effective way. RDBMS would continue to be used for real-time applications, since these are time-tested and essentially do not face serious competition (not yet!) from the new-age NoSQL databases. So the demand for tools that bring RDBMS data into Hadoop analytics systems is going to be hot! 'hiho' and Sqoop are the two top contenders in this category. Hopefully Sonal will be able to share with us the power of hiho as well as its pros/cons over Sqoop.
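For a flavour of what such RDBMS-to-HDFS loading looks like, here is an illustrative Sqoop import invocation. The host, database, table and target paths are placeholders, and this is a sketch of typical usage rather than a recipe for any particular setup:

```shell
# Pull an RDBMS table into HDFS with Sqoop (placeholder names throughout);
# -P prompts for the password, -m 4 runs the import as 4 parallel map tasks.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username analyst -P \
  --table orders \
  --target-dir /user/hadoop/orders \
  -m 4
```

hiho plays in the same space, so a session comparing the two approaches should be valuable.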
The JAQL talk from Himanshu of IBM would again be interesting – it shows that people are trying out approaches other than MapReduce Java/streaming coding and the traditional Pig and Hive high-level interfaces. The challenge for Himanshu would be to help us understand how JAQL is better than Hive or Pig.
Sajal would be talking about Hive + Intellicus – a window to the unstoppable future of Hadoop in DW and BI.
I have always been more biased towards Hive, as SQL and Java usually go hand in hand in almost all business applications. So it would be interesting to know how Hadoop, through Hive, is slowly becoming ready for enterprise applications and providing a visual interface for data analytics. It seems, at last, Hadoop is ready to come out of the developer-only world and enter the domain of business users.