Monday, September 12, 2016

Amazon EMR: running a custom JAR with input and output on S3


I am trying to run an EMR cluster with a custom JAR step. The program takes its input from S3 and writes its output to S3 (or at least that is what I want to accomplish). In the step configuration, I have the following in the arguments field:

v3.MaxTemperatureDriver s3n://hadoopbook/ncdc/all s3n://hadoop-szhu/max-temp 

where hadoopbook/ncdc/all is the path within the bucket that contains the input data (as a side note, the example I'm running is from this book), and hadoop-szhu is my own bucket where I want to store the output. Following this post, my MapReduce driver looks like this:

package v3;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import v1.MaxTemperatureReducer;

public class MaxTemperatureDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    if (args.length != 2) {
      System.err.printf("Usage: %s [generic options] <input> <output>\n",
          getClass().getSimpleName());
      ToolRunner.printGenericCommandUsage(System.err);
      return -1;
    }

    Job job = new Job(getConf(), "Max temperature");
    job.setJarByClass(getClass());

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(MaxTemperatureMapper.class);
    job.setCombinerClass(MaxTemperatureReducer.class);
    job.setReducerClass(MaxTemperatureReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    int exitCode = ToolRunner.run(new MaxTemperatureDriver(), args);
    System.exit(exitCode);
  }
}
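For reference, the same custom JAR step can also be added from the command line instead of the console. A minimal sketch using the AWS CLI (the cluster ID j-XXXXXXXXXXXXX and the JAR location s3://hadoop-szhu/max-temp.jar are placeholders for illustration, not my real values):

# Hypothetical cluster ID and JAR path; the Args list mirrors the arguments field above
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps Type=CUSTOM_JAR,Name=MaxTemperature,ActionOnFailure=CONTINUE,Jar=s3://hadoop-szhu/max-temp.jar,Args=[v3.MaxTemperatureDriver,s3n://hadoopbook/ncdc/all,s3n://hadoop-szhu/max-temp]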

However, when I try to run this, I get the following error:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: s3n 

I've also tried to copy the data from S3 to the cluster using the following (run after SSHing into the master node):

hadoop distcp \
  -Dfs.s3n.awsAccessKeyId='...' \
  -Dfs.s3n.awsSecretAccessKey='...' \
  s3n://hadoopbook/ncdc/all input/ncdc/all

But I get a bunch of errors; an excerpt is included below:

2016-09-03 07:07:11,858 FATAL [IPC Server handler 6 on 43495] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1472884232220_0001_m_000000_0 - exited : java.io.IOException: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:224)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    ... 10 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.getFileStatus(EmrFileSystem.java:511)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219)
    ... 9 more
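As a sanity check, it may help to list the source path directly from the master node to see whether it is reachable at all; a minimal sketch (assuming the same credentials as in the distcp command above):

# List the input path over s3n using the same credentials as the distcp attempt
hadoop fs \
  -Dfs.s3n.awsAccessKeyId='...' \
  -Dfs.s3n.awsSecretAccessKey='...' \
  -ls s3n://hadoopbook/ncdc/all/

If the listing itself fails, the problem lies in reaching the bucket over s3n rather than in distcp.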

I'm not sure where the issue lies, but I would be happy to include more details (please comment below). Thanks!

1 Answer

s3n:// is the old protocol; you should be using s3:// instead (on Amazon EMR, s3:// is the EMRFS scheme that AWS recommends for accessing S3).
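With that change, the arguments field from the question would become:

v3.MaxTemperatureDriver s3://hadoopbook/ncdc/all s3://hadoop-szhu/max-temp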

Reference: http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-plan-file-systems.html
