The Customize Windows

Technology Journal


By Abhishek Ghosh March 6, 2017 1:31 pm Updated on March 6, 2017

Installing Local Data Lake on Ubuntu Server : Part 1


In previous guides, we covered the basic installation and setup of the major Big Data software packages. This is Part 1 of Installing a Local Data Lake on Ubuntu Server with Hadoop, Spark, Thrift Server, Jupyter and related tools, to build a prediction system. We suggest servers from VPSDime as they cost very little – $7 per month for 6 GB RAM – though 12 GB RAM is the practical minimum for this setup. We have discussed some limitations of OpenVZ virtualization before; VPSDime is fine for test setups as long as you stay within their rules. Our older guides took analysis of data such as log files as one path; a prediction software system is another. We will use Ubuntu Server, as most users can work with it.

I cannot guarantee that the version numbers are free of typos; the current WordPress editor has hundreds of quirky features, and configuration text can get mangled when wrongly switched from Text to Visual mode.


Installing Local Data Lake on Ubuntu Server : What is Data Lake?

 

A data lake is a method of storing data within a system that facilitates the collocation of data in varied schemata and structural forms, serving tasks such as reporting, visualization, analytics and machine learning. The Hadoop Distributed File System (HDFS) itself is an example of a data lake.
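The defining property – many schemata co-located in one store, interpreted only at read time – can be sketched with a plain directory standing in for HDFS. The directory layout and field names below are illustrative, not from any real deployment:

```python
import csv
import json
import os
import tempfile

# A plain local directory stands in for the distributed store (HDFS above).
lake = tempfile.mkdtemp()

# Two datasets land in the lake in their raw forms: CSV and JSON side by side.
with open(os.path.join(lake, "sales.csv"), "w") as f:
    f.write("item,price\nbook,12\npen,2\n")
with open(os.path.join(lake, "events.json"), "w") as f:
    json.dump([{"user": "a", "action": "click"}], f)

# Schema-on-read: each consumer applies its own schema when it reads.
with open(os.path.join(lake, "sales.csv")) as f:
    sales = list(csv.DictReader(f))
with open(os.path.join(lake, "events.json")) as f:
    events = json.load(f)

total = sum(int(row["price"]) for row in sales)
print(total, events[0]["action"])  # → 14 click
```

Nothing forces the two files into one schema up front; that deferral is what distinguishes a lake from a warehouse.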


 

Installing Local Data Lake on Ubuntu Server

 

Please follow our previous guides to install the needed components:

  1. Install Hadoop (configure it exactly in that way)
  2. Install Spark

You also need to create an OpenSSL certificate:

openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout cert.pem -out cert.pem

So, up to this step, we have SSH configured, a user added and some software installed. Now install Python with packages such as textblob, scikit-learn and Jupyter Notebook, which you can use for testing:

apt install python-pip
apt install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
pip install textblob
python -m textblob.download_corpora
sudo pip install --upgrade ipython
sudo pip install jupyter
sudo apt-get install libsasl2-dev
sudo pip install sasl
sudo pip install pyhs2
# Jupyter
git clone http://github.com/nasdag/pyspark
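After the installs, a quick sanity check from Python confirms the packages are importable. The module names below follow the apt/pip commands above; run this inside the same interpreter you will use:

```python
import importlib.util

# Modules the commands above should have provided; adjust to your setup.
wanted = ["numpy", "scipy", "matplotlib", "pandas", "sympy", "textblob"]
missing = [name for name in wanted if importlib.util.find_spec(name) is None]
print("missing:", missing or "none")
```

Anything reported missing points at a failed install to redo before moving on.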

If you run this:

ipython

You’ll get:

In [1]: from IPython.lib import passwd
In [2]: passwd()
Enter password:
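`passwd()` returns a salted digest in the form `algorithm:salt:hash`, which later goes into `jupyter_notebook_config.py` as the notebook password. A rough re-implementation of that scheme – a sketch of the format, not IPython’s exact code:

```python
import hashlib
import random

def make_passwd(password, algorithm="sha1"):
    # IPython-style salted hash: "algorithm:salt:hexdigest".
    salt = "%012x" % random.getrandbits(48)
    h = hashlib.new(algorithm)
    h.update(password.encode() + salt.encode())
    return "%s:%s:%s" % (algorithm, salt, h.hexdigest())

def check_passwd(hashed, attempt):
    # Recompute the digest from the stored salt and compare.
    algorithm, salt, digest = hashed.split(":")
    h = hashlib.new(algorithm)
    h.update(attempt.encode() + salt.encode())
    return h.hexdigest() == digest

stored = make_passwd("secret")
print(check_passwd(stored, "secret"), check_passwd(stored, "wrong"))  # → True False
```

The point is that only the salted hash, never the plain password, ends up in the config file.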

Complete the steps. Next, configure:

jupyter notebook --generate-config
mkdir -p ~/tutorials
cd ~/tutorials
git clone http://github.com/nasdag/pyspark
nano ~/.jupyter/jupyter_notebook_config.py

I have published a sample configuration file as a gist; you should fork or copy-paste it and edit it.
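As a rough sketch, the relevant lines of `~/.jupyter/jupyter_notebook_config.py` look like this; the paths and the hash are placeholders you must replace with your own values from the steps above:

```python
# ~/.jupyter/jupyter_notebook_config.py (fragment; values are placeholders)
c = get_config()
c.NotebookApp.certfile = u'/path/to/cert.pem'   # the OpenSSL cert created earlier
c.NotebookApp.ip = '*'                          # listen on all interfaces
c.NotebookApp.open_browser = False
c.NotebookApp.password = u'sha1:...:...'        # output of passwd()
c.NotebookApp.port = 4334
```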

You should use a domain name and follow our guide to install a Let’s Encrypt SSL certificate. But at this stage, you can go to:

https://host_ip_address:4334/pyspark/

Now install MySQL from the repository:

apt install mysql-server
apt install libmysql-java

Now we will install Apache Hive; the latest version is available at:

https://hive.apache.org/downloads.html

apache-hive-2.1.1 was the latest in my case, so these are the commands:

wget http://www-eu.apache.org/dist/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz
tar -zxvf apache-hive-2.1.1-bin.tar.gz

Basically, I forgot the path of the sample MySQL scripts (it was different before); it should be like:

/metastore/scripts/upgrade/mysql/

https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/mysql

I am describing the steps of configuring Hive roughly. You have to go to that ../../upgrade/mysql/ directory and run these:

mysql -u root -p
Enter password:
CREATE DATABASE metastore;
USE metastore;
SOURCE hive-schema-1.2.0.mysql.sql;
CREATE USER 'hiveuser'@'%' IDENTIFIED BY 'hivepassword';
GRANT all on *.* to 'hiveuser'@localhost identified by 'hivepassword';
flush privileges;
exit;

You’ll get detailed steps on the official website. Now you have to install Scala and Maven:

http://scala-lang.org/
https://maven.apache.org/download.cgi

This is an example of configuring them:

wget http://downloads.lightbend.com/scala/2.12.1/scala-2.12.1.tgz
sudo tar -xzf scala-2.12.1.tgz -C /usr/local/share
rm scala-2.12.1.tgz
wget http://www-eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
sudo tar -xzf apache-maven-3.3.9-bin.tar.gz -C /usr/local/share
sudo mv /usr/local/share/apache-maven-3.3.9 /usr/local/share/maven-3.3.9
rm apache-maven-3.3.9-bin.tar.gz

You need to edit:

/usr/local/share/hadoop-x.y.z/etc/hadoop/core-site.xml

to this:

<configuration>

<property>
<name>hadoop.tmp.dir</name>
<value>/var/local/hadoop/tmp</value>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>

</configuration>
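Hadoop’s XML config files are easy to break with a stray tag. A small standard-library Python check can confirm a file parses and contains the expected property; here the core-site.xml content from above is embedded as a string for illustration rather than read from disk:

```python
import xml.etree.ElementTree as ET

# The core-site.xml content from above, embedded for illustration.
xml_text = """<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/local/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
</configuration>"""

root = ET.fromstring(xml_text)  # raises ParseError if the XML is malformed
props = {p.findtext("name"): p.findtext("value") for p in root.findall("property")}
print(props["fs.default.name"])  # → hdfs://localhost:54310
```

Pointing `ET.parse()` at the real file instead gives a quick lint of any of the XML files edited below.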

Edit:

/usr/local/share/hadoop-x.y.z/etc/hadoop/mapred-site.xml

to this:

<configuration>

<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
</property>

</configuration>

Edit:

/usr/local/share/hadoop-x.y.z/etc/hadoop/hdfs-site.xml

to:

<configuration>

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

</configuration>

Edit:

/usr/local/share/hadoop-x.y.z/etc/hadoop/hadoop-env.sh

to:

export JAVA_HOME=/usr/lib/jvm/java-7-oracle

Edit:

nano ~/.bashrc

to:

export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export SCALA_HOME=/usr/local/share/scala-x.y.z
export MAVEN_HOME=/usr/local/share/maven-x.y.z
export PATH=$PATH:$MAVEN_HOME/bin:$SCALA_HOME/bin:/home/nasdag/idea-IC/bin/
export IBUS_ENABLE_SYNC_MODE=1
export HADOOP_HOME=/usr/local/share/hadoop-x.y.z
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
export SPARK_HOME=/usr/local/share/spark-x.y.z
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
export HADOOP_USER_CLASSPATH_FIRST=true
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop/
export PYSPARK_SUBMIT_ARGS="--packages com.databricks:spark-csv_2.11:1.1.0 pyspark-shell"
export PATH=$PATH:/home/nasdag/zeppelin/bin
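After sourcing `~/.bashrc`, the exported variables should be visible to child processes. A quick Python sketch lists any that are missing; the variable names follow the exports above:

```python
import os

# Variables the ~/.bashrc fragment above is expected to export.
wanted = ["JAVA_HOME", "SCALA_HOME", "MAVEN_HOME", "HADOOP_HOME",
          "SPARK_HOME", "HADOOP_CONF_DIR"]
missing = [v for v in wanted if v not in os.environ]
print("missing:", missing or "none")
```

If anything is reported missing, re-check the `.bashrc` edits and open a fresh shell before continuing.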

Edit:

/usr/local/share/spark-x.y.z/conf/hive-site.xml

to:

<configuration>
   <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
      <description>metadata is stored in a MySQL server</description>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
      <description>MySQL JDBC driver class</description>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hiveuser</value>
      <description>user name for connecting to mysql server</description>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>hivepassword</value>
      <description>password for connecting to mysql server</description>
   </property>
</configuration>

Perform:

sudo mkdir -p /usr/local/share/spark-x.y.z/logs; sudo chmod 777 /usr/local/share/spark-x.y.z/logs

Edit:

/usr/local/share/spark-x.y.z/conf/spark-defaults.conf

to:

spark.driver.extraClassPath        /usr/share/java/mysql-connector-java.jar
spark.master                       local[2]

Edit:

nano ~/.ipython/profile_default/startup/initspark.py

to:

import sys
sys.path.append('/usr/local/share/spark-x.y.z/python/')
sys.path.append('/usr/local/share/spark-x.y.z/python/lib/py4j-x.y.z-src.zip')

Install Zeppelin and IntelliJ IDEA:

cd ~
git clone http://github.com/apache/incubator-zeppelin
mv incubator-zeppelin zeppelin
cd zeppelin
export MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=128m"
mvn install -DskipTests -Dspark.version=1.5.2 -Dhadoop.version=2.6.2
nano conf/zeppelin-env.sh

Add lines of this kind:

export SPARK_HOME=/usr/local/share/spark-1.5.2
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.11:1.1.0 --jars /usr/share/java/mysql-connector-java.jar"

Run:

wget https://download.jetbrains.com/idea/ideaIC-15.0.2.tar.gz
tar -xzf ideaIC-15.0.2.tar.gz -C ~
mv ~/idea-IC-143.1184.17 ~/idea-IC
rm ideaIC-15.0.2.tar.gz

Start all the services:

start-dfs.sh
start-thriftserver.sh
zeppelin-daemon.sh start

Now you can visit http://ip_address:8080/.
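Whether the services actually came up can be probed from Python by attempting TCP connections to their ports. 8080 is Zeppelin as above; 10000 is the usual Thrift Server default, which is an assumption here – check your own configuration:

```python
import socket

def port_open(host, port, timeout=2.0):
    # Returns True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 8080: Zeppelin web UI; 10000: assumed Thrift Server port.
for port in (8080, 10000):
    print(port, "open" if port_open("localhost", port) else "closed")
```

A "closed" result means the corresponding daemon did not start; check its log directory before proceeding.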


About Abhishek Ghosh

Abhishek Ghosh is a businessman, surgeon, author and blogger. You can keep in touch with him on Twitter – @AbhishekCTRL.


About This Article

Cite this article as: Abhishek Ghosh, "Installing Local Data Lake on Ubuntu Server : Part 1," in The Customize Windows, March 6, 2017, January 31, 2023, https://thecustomizewindows.com/2017/03/installing-local-data-lake-ubuntu-server-part-1/.
