Azure Data Lake Store is a cloud repository where you can easily store data of any size and of any type. It is essentially the Hadoop Distributed File System (HDFS) for the cloud, available on demand. Data stored in Data Lake Store is easily accessible to Azure Data Lake Analytics and Azure HDInsight, and it can also be integrated with other Hadoop distributions and projects such as Hortonworks, Cloudera, Spark, Storm, and Flume.
Below are the steps to create an Azure Data Lake Store and manage it using the Azure Portal and the Azure CLI.
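For the CLI route, a minimal sketch using the `az dls` command group looks like the following. The account name, resource group, region, and file paths are placeholders, not values from this article; substitute your own. (Running these requires an authenticated Azure CLI session, so they are not independently testable here.)

```shell
# Create a resource group to hold the account (placeholder names).
az group create --name myresourcegroup --location eastus2

# Create the Data Lake Store account itself.
az dls account create \
    --account mydatalakestore \
    --resource-group myresourcegroup \
    --location eastus2

# Create a folder and upload a local file into it.
az dls fs create --account mydatalakestore --path /clickstream --folder
az dls fs upload --account mydatalakestore \
    --source-path ./logs/day1.csv \
    --destination-path /clickstream/day1.csv

# List the folder contents to verify the upload.
az dls fs list --account mydatalakestore --path /clickstream
```

The same operations can be performed interactively in the Azure Portal; the CLI form is convenient for scripting and repeatable deployments.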
What is an Enterprise Data Lake?
Way back in 2010, Pentaho co-founder and CTO James Dixon coined the term ‘Data Lake’. While many interpretations of the term exist these days, it usually means a repository that holds a vast amount of raw data in its native format until it is needed. Data is stored at its most granular level so that any ad-hoc analysis can be performed at any time.
I am using HDP for Windows (184.108.40.206) single node and Eclipse as the development environment. Below are a few samples for reading from and writing to HDFS.
- Create a new Java Project in Eclipse.
- In the Java Settings, go to Libraries and add External JARs. Browse to the Hadoop installation folder and add the following JAR file: hadoop-core.jar
- Go into the lib folder and add the following JAR file: commons-configuration-1.6.jar
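With the project set up, a minimal read/write sample against HDFS might look like the sketch below. The NameNode URI (`hdfs://localhost:8020`), the class name, and the file path are assumptions for a single-node setup, not values from this article; adjust them to match your cluster. (It needs the Hadoop JARs on the classpath and a running HDFS, so it cannot be verified standalone.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class HdfsReadWriteSample {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode URI for an assumed single-node install; change host/port as needed.
        conf.set("fs.default.name", "hdfs://localhost:8020");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/user/sample/hello.txt");

        // Write: create (or overwrite) the file and write a line of text.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeBytes("Hello HDFS!\n");
        }

        // Read: open the same file back and print its contents line by line.
        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }

        fs.close();
    }
}
```

Note that `fs.default.name` is the older configuration key that matches the hadoop-core.jar (Hadoop 1.x) API used above; newer Hadoop 2.x code uses `fs.defaultFS` instead.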