Sanjay Sharma’s Weblog

August 16, 2010

Hadoop Ecosystem World-Map

While preparing the keynote for the recently held HUG India meetup on 31st July, I decided I would try to keep my session short, but useful and relevant to the lined-up sessions on hiho, JAQL and Visual Hive. I have always been a keen student of geography (and still take pride in it!), and thought it would be great to draw a visual geographical map of the Hadoop ecosystem. Here is what I came up with, along with the nice little story behind it:

  1. How did it all start? Huge data on the web!
  2. Nutch was built to crawl this web data.
  3. That huge data had to be saved – HDFS was born!
  4. How to use this data?
  5. The MapReduce framework was built for coding and running analytics – in Java, or in any language via Streaming/Pipes (a minimal Streaming sketch follows this list).
  6. How to get unstructured data in – web logs, click streams, Apache logs, server logs – Fuse, WebDAV, Chukwa, Flume, Scribe.
  7. Hiho and Sqoop for loading data into HDFS – RDBMSs can join the Hadoop bandwagon!
  8. High-level interfaces needed over low-level MapReduce programming – Pig, Hive, JAQL.
  9. BI tools with advanced UI reporting – drill-down etc. – Intellicus.
  10. Workflow tools over MapReduce processes and the high-level languages.
  11. Monitor and manage Hadoop, run jobs/Hive, view HDFS – high-level views – Hue, Karmasphere, Eclipse plugin, Cacti, Ganglia.
  12. Support frameworks – Avro (serialization), ZooKeeper (coordination).
  13. More high-level interfaces/uses – Mahout, Elastic MapReduce.
  14. OLTP is also possible – HBase.
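
As a side note to item 5, here is a minimal, purely illustrative word-count job written for Hadoop Streaming in Python. The file name and the streaming jar location are assumptions and depend on the Hadoop installation.

# wordcount_streaming.py - a minimal Hadoop Streaming sketch (file layout assumed).
# The same script acts as mapper or reducer depending on its first argument, e.g.:
#   hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-*streaming*.jar \
#       -input /logs/raw -output /logs/wordcount \
#       -mapper "python wordcount_streaming.py map" \
#       -reducer "python wordcount_streaming.py reduce" \
#       -file wordcount_streaming.py
import sys

def mapper():
    # Emit one "word<TAB>1" line per word read from stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t%d" % (word, 1))

def reducer():
    # Streaming sorts mapper output by key, so all counts for a word arrive together.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(value)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()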

I would love to hear feedback on this, and on how to grow it further to add the missing parts!

Hadoop ecosystem map

February 8, 2010

BI with MapReduce

Filed under: Advanced computing, Hadoop — indoos @ 2:12 pm

Have any of you used map reduce in the context of business intelligence?

While collating my thoughts on this LinkedIn Hadoop discussion, I found that I needed more visuals to explain it first to myself :).

So, here are the ways in which Hadoop MapReduce offers an alternative in the big, big BI world:

Scenario 1: Use Hadoop and Hive as an interface to BI tools. Pentaho reporting is already supported as of Hive 0.4.0.
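
As a rough, hypothetical sketch of this scenario (not from the original post), the Python snippet below shells out to the Hive CLI to run an aggregation whose result a reporting layer could consume. The table and column names (page_views, dt) are made up; a BI tool such as Pentaho would normally talk to Hive through its JDBC driver rather than the CLI.

# hive_report.py - hypothetical sketch: run an aggregate HiveQL query through
# the Hive CLI ("hive -S -e") and collect the result rows for a report.
# The table and columns (page_views, dt) are assumptions.
import subprocess

QUERY = """
SELECT dt, COUNT(1) AS hits
FROM page_views
GROUP BY dt
ORDER BY dt
"""

def run_hive(query):
    # The Hive CLI prints result rows to stdout as tab-separated text;
    # -S (silent mode) keeps log chatter out of the output.
    proc = subprocess.Popen(["hive", "-S", "-e", query], stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError("Hive query failed")
    return [line.split("\t") for line in out.decode().splitlines() if line.strip()]

if __name__ == "__main__":
    for dt, hits in run_hive(QUERY):
        print("%s\t%s" % (dt, hits))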

Scenario 2: Use Hadoop for initial data polishing, and then dump to a SQL-supported column-based database for near-real-time BI reporting. Aster Data/Vertica/Greenplum sell themselves by heavily advertising MapReduce connectors (or similar). The cost of a SQL-supported column-based database is the only pain point here (plus the risk of how these actually scale versus what they promise).

Scenario 3: Use Hadoop for initial data polishing, and then dump to a SQL-supported column-based database for near-real-time BI reporting. For real-time reporting, data can be further BI-polished and moved from the column-based database to a fast regular RDBMS with BI support.

 

Scenario 4: The free way :) – Use Hadoop for initial data polishing, and then dump to a regular SQL database with BI support. The export from HDFS can be done the un-Sqoop way. The onus would be more on the developer to dump only ready-for-report data (much less of it), with most of the BI already completed as part of the additional ("More MR") MapReduce step.
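
To make that "un-Sqoop" export concrete, here is a hypothetical Python sketch: it assumes the MapReduce job's tab-separated part files have already been copied out of HDFS (for example with hadoop fs -get), and bulk-inserts them into a SQL table. sqlite3 is used only as a stand-in for whatever BI-backed RDBMS is actually in place; the table and paths are made up.

# load_report_table.py - a hypothetical "un-Sqoop" export: read tab-separated
# part files that a MapReduce job wrote (copied locally, e.g. with
# "hadoop fs -get /logs/wordcount report/") and bulk-insert them into a SQL
# database. sqlite3 stands in for the actual BI-backed RDBMS.
import glob
import sqlite3

def load_parts(db_path, part_glob):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS word_counts (word TEXT, cnt INTEGER)")
    for path in glob.glob(part_glob):
        with open(path) as f:
            rows = [line.rstrip("\n").split("\t") for line in f if line.strip()]
        conn.executemany("INSERT INTO word_counts (word, cnt) VALUES (?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load_parts("report.db", "report/part-*")

The same approach works with a MySQL or PostgreSQL driver; the point is that only the small, report-ready aggregate ever leaves HDFS.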

The important fact to note is that there might be additional costs in moving the major chunk of the BI data-analysis work to programmatic interfaces (SQL or MR).

I am not too much of the database-fallen-in-love type, so I do like the way Hive can emerge as a potential BI reporting tool.
