Debugging Hadoop Locally in IntelliJ IDEA

How to develop and debug Hadoop locally in IntelliJ IDEA.

Software versions

IntelliJ IDEA: 2016.3.2

Hadoop: 2.6.5

Dataset: Hadoop: The Definitive Guide
(the 1901 and 1902 weather records used by the sample code of "Hadoop: The Definitive Guide")

The source code used for debugging and testing is attached below.

hadooptestMapper

package cn.zcs.zzuli;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * Created by 张超帅 on 2018/6/23.
 */
public class hadooptestMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final int MISSING = 9999;

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        // NCDC fixed-width record: characters 15-18 hold the year.
        String year = line.substring(15, 19);
        // Characters 87-91 hold the signed air temperature.
        int airTemperature;
        if (line.charAt(87) == '+') { // parseInt doesn't like leading plus signs
            airTemperature = Integer.parseInt(line.substring(88, 92));
        } else {
            airTemperature = Integer.parseInt(line.substring(87, 92));
        }
        // Character 92 is the quality code; emit only valid, non-missing readings.
        String quality = line.substring(92, 93);
        if (airTemperature != MISSING && quality.matches("[01459]")) {
            context.write(new Text(year), new IntWritable(airTemperature));
        }
    }
}

hadooptestReducer

package cn.zcs.zzuli;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * Created by 张超帅 on 2018/6/23.
 */
public class hadooptestReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int maxValue = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            maxValue = Math.max(maxValue, value.get());
        }
        context.write(key, new IntWritable(maxValue));
    }
}

hadooptest

package cn.zcs.zzuli;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.File;

/**
 * Created by 张超帅 on 2018/6/23.
 */
public class hadooptest {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperature <input path> <output path>");
            System.exit(-1);
        }
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(hadooptest.class);
        job.setJobName("Max temperature");

        // Work around Windows path/permission issues (see the FAQ): list the
        // input directory ourselves and add each file as a separate input path.
        File inputdir = new File(args[0]);
        File[] files = inputdir.listFiles();
        if (files == null) {
            System.err.println("Input path is not a readable directory: " + args[0]);
            System.exit(-1);
        }
        for (File file : files) {
            FileInputFormat.addInputPath(job, new Path(file.toString()));
        }

        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(hadooptestMapper.class);
        job.setReducerClass(hadooptestReducer.class);

        // Output key/value types of the job (the reducer's output types).
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
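
For reference, running the same jar outside the IDE would look like this (assuming the hadoop command is on the PATH; the jar name is hypothetical):

hadoop jar hadooptest.jar cn.zcs.zzuli.hadooptest input output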

Creating the project (File -> New -> Project)

  • Create a package and add the three classes above to it.

  • The project structure is shown in the screenshot below.

  • The screenshot shows an already-configured environment, so no errors appear; the configuration steps below are still required before development.

Configuring Hadoop for development

  • File -> Project Structure -> Modules
  • Click + -> JARs or Directories, and add the Hadoop jars as module dependencies (see the directory list below).
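
For illustration only (paths assume an unpacked Hadoop 2.6.5 binary distribution under %HADOOP_HOME%; adjust to your install), the dependency jars typically live under:

%HADOOP_HOME%\share\hadoop\common
%HADOOP_HOME%\share\hadoop\common\lib
%HADOOP_HOME%\share\hadoop\hdfs
%HADOOP_HOME%\share\hadoop\mapreduce
%HADOOP_HOME%\share\hadoop\yarn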

  • In Project Structure, open Artifacts.
  • Click + to add a JAR.

  • Click + -> Module Output -> OK.

  • Done.

  • Open Edit Configurations.

  • In the Edit Configurations dialog, click + -> Application.

  • Fill in the run configuration fields as follows.

- (1) Main class must be org.apache.hadoop.util.RunJar.
- (2) Program arguments, as shown in the screenshot above:
    - The first argument is the path to the jar file configured earlier in Project Structure, the second is the input directory, and the third is the output path (see the sketch after this list).

- (3) Create an input directory in the project and put the input files into it (do not create the output directory; Hadoop creates it itself).
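
For illustration only (the artifact path is hypothetical and depends on your Artifacts settings), the Program arguments field might read:

out/artifacts/hadooptest_jar/hadooptest.jar input output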

FAQ

Class not found

  • The Program arguments entered earlier are missing one argument: the fully qualified name of the class containing main must be inserted after the jar path:
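
With the hypothetical paths from above, the corrected Program arguments would be:

out/artifacts/hadooptest_jar/hadooptest.jar cn.zcs.zzuli.hadooptest input output

org.apache.hadoop.util.RunJar treats its first argument as the jar, and, when the jar's manifest declares no Main-Class, the second argument as the class to run; the remaining arguments are passed on to that class's main method.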

Running a Hadoop 2.x MapReduce program (reported from Eclipse, but the same occurs in IDEA) fails with (null) entry in command string: null chmod 0700

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:609)
    at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
    at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
    at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
    at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:125)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at com.jack.hadoop.temperature.MinTemperature.main(MinTemperature.java:37)

Input-directory problems for Hadoop on Windows (permission errors)

  • Solutions:
    • When configuring Program arguments, write out the concrete input file paths.
    • Alternatively, use a wildcard pattern, e.g. input/*.
    • On Windows you can also list the directory programmatically and add every file, as in the snippet below.
File inputdir = new File(args[0]);
File[] files = inputdir.listFiles();
for (File file : files) {
    FileInputFormat.addInputPath(job, new Path(file.toString()));
}
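
For the wildcard approach, note that FileInputFormat expands its input paths as glob patterns, so a single call can cover every file in the directory (a minimal sketch, assuming the inputs live under input/):

// Input paths are interpreted as glob patterns; this matches all files under input/.
FileInputFormat.addInputPath(job, new Path("input/*"));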

Hadoop error: Failed to locate the winutils binary in the hadoop binary path

  • This is also a path problem; for local runs it is a warning that can usually be ignored.

No log output in the console when Hadoop is configured through Maven in IDEA

  • The logs do appear in the debug console, but the regular run console prints no Hadoop log output. This can be resolved by referencing the local Hadoop installation's jars rather than only the Maven artifacts.
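
A common alternative fix (not the author's approach above, but widely used) is to make sure a log4j.properties file is on the classpath, e.g. under src/main/resources; a minimal sketch:

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n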

How to extract a .tar.gz file on Windows

  • Download and use 7-Zip.