Tuesday, March 17, 2015

FileCrush replacement format

Error message:
-bash: --replacement=$1-${crush.timestamp}-${crush.task.num}-${crush.file.num}: bad substitution

Fix: escape the dollar signs and braces so bash passes the template through literally instead of attempting parameter expansion:
--replacement=\$1-\$\{crush.timestamp\}-\$\{crush.task.num\}-\$\{crush.file.num\}
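An equivalent fix (a minimal sketch; `echo` stands in for the actual FileCrush invocation here) is to single-quote the whole argument, which stops bash from attempting any expansion:

```shell
# bash treats ${crush.timestamp} as a parameter expansion and fails with
# "bad substitution" because '.' is not a valid character in a variable name.
# Single-quoting the argument (instead of backslash-escaping each character)
# passes the template through to the program untouched:
echo --replacement='$1-${crush.timestamp}-${crush.task.num}-${crush.file.num}'
```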

Wednesday, March 4, 2015

Parse hosts file in Fabric

Add hosts-file parsing code to your fabfile.py.

The hosts_file supports comments and host ranges:
$ cat hosts_file
# This is a comment, using # as prefix
hadoopdev01
hadoopprod[01-10]
#hadoopdev02
192.168.1.[1-200]

Wednesday, November 5, 2014

User sqoop2 cannot submit applications to queue root.sqoop2

Error messages:
sqoop:000> start job --jid 2
2014-11-05 12:49:31 WIB: FAILURE_ON_SUBMIT
Exception: java.io.IOException: Failed to run job : User sqoop2 cannot submit applications to queue root.sqoop2
Stack trace: java.io.IOException: Failed to run job : User sqoop2 cannot submit applications to queue root.sqoop2
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:300)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
        at org.apache.sqoop.submission.mapreduce.MapreduceSubmissionEngine.submit(MapreduceSubmissionEngine.java:247)
        at org.apache.sqoop.framework.JobManager.submit(JobManager.java:418)
        at org.apache.sqoop.handler.SubmissionRequestHandler.submissionSubmit(SubmissionRequestHandler.java:152)
        at org.apache.sqoop.handler.SubmissionRequestHandler.handleActionEvent(SubmissionRequestHandler.java:122)
        at org.apache.sqoop.handler.SubmissionRequestHandler.handleEvent(SubmissionRequestHandler.java:75)
        at org.apache.sqoop.server.v1.SubmissionServlet.handlePostRequest(SubmissionServlet.java:44)
        at org.apache.sqoop.server.SqoopProtocolServlet.doPost(SqoopProtocolServlet.java:63)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:643)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:745)

Fix:
CM -> YARN Configuration -> Fair Scheduler Allocations
Add the sqoop2 user to the queue's submit ACL to allow it to submit jobs.
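The allocations change can be sketched as a fair-scheduler snippet like the one below (the queue name comes from the error message above; `aclSubmitApps` is the Fair Scheduler element that lists users and groups allowed to submit to a queue):

```xml
<allocations>
  <queue name="sqoop2">
    <!-- comma-separated users (then groups) allowed to submit to root.sqoop2 -->
    <aclSubmitApps>sqoop2</aclSubmitApps>
  </queue>
</allocations>
```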

Tuesday, October 21, 2014

Add Service HIVE Error on CDH5 Wizard

Command failed to run because service Hive has invalid configuration. Review and correct its configuration. First error: 'Hive Metastore Database Host' is required when using database type 'postgresql'

Fix: Go to the Hive service configuration, search for "Hive Metastore Database Host", and fill it with the PostgreSQL server hostname.

Tuesday, September 9, 2014

Forqlift direct writing to HDFS

1. Combine your existing core-site.xml and hdfs-site.xml into $FORQLIFT_HOME/conf/core-site.xml. Make sure the merged file contains only a single <configuration> tag.
2. Copy the following libraries from your existing Hadoop installation to $FORQLIFT_HOME/lib.base:
- avro-*.jar
- guava-*.jar
- hadoop-auth-*.jar
- hadoop-common-*.jar
- hadoop-hdfs-*.jar
- protobuf-java-*.jar
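A sketch of the merged file from step 1 (the property names are standard Hadoop keys, but the values here are placeholders): all `<property>` elements from both files go under one `<configuration>` root.

```xml
<?xml version="1.0"?>
<configuration>
  <!-- from core-site.xml -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
  <!-- from hdfs-site.xml -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```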

Credit to @papaAta

Friday, October 11, 2013

MySQL Workbench can't store connection passwords

This is a workaround if you hit this issue: hardcode the password for each connection in ~/.mysql/workbench/connections.xml:

<value type="string" key="password">yoursecretpassword</value>