Today I wanted to test the correctness of my earlier article, "A Dictionary-Free Word Segmentation Algorithm Based on Information Entropy," against some real data, so I wrote a MapReduce program to pull data out of a MSSQL Server 2008 database for analysis. When the program was deployed to the Hadoop machines and run, it failed with a SQLException.

Exporting MSSQL data to HDFS with MapReduce

Strange — my SQL statement contains no LIMIT, so where did this LIMIT come from? I dug into the source of the DBInputFormat class:

protected RecordReader<LongWritable, T> createDBRecordReader(DBInputSplit split,
    Configuration conf) throws IOException {

  @SuppressWarnings("unchecked")
  Class<T> inputClass = (Class<T>) (dbConf.getInputClass());
  try {
    // use database product name to determine appropriate record reader.
    if (dbProductName.startsWith("ORACLE")) {
      // use Oracle-specific db reader.
      return new OracleDBRecordReader<T>(split, inputClass,
          conf, createConnection(), getDBConf(), conditions, fieldNames,
          tableName);
    } else if (dbProductName.startsWith("MYSQL")) {
      // use MySQL-specific db reader.
      return new MySQLDBRecordReader<T>(split, inputClass,
          conf, createConnection(), getDBConf(), conditions, fieldNames,
          tableName);
    } else {
      // Generic reader.
      return new DBRecordReader<T>(split, inputClass,
          conf, createConnection(), getDBConf(), conditions, fieldNames,
          tableName);
    }
  } catch (SQLException ex) {
    throw new IOException(ex.getMessage());
  }
}
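As the source shows, only Oracle and MySQL get dialect-specific readers; SQL Server falls through to the generic DBRecordReader, which pages each split's results by appending a MySQL-style `LIMIT ... OFFSET ...` clause to the query — syntax that T-SQL rejects, which would explain the SQLException. The sketch below is my own illustration of the dialect difference (the `PagingSketch` class and its methods are not part of Hadoop's API; Hadoop builds the clause inside `DBRecordReader.getSelectQuery()`):

```java
// Illustrative only: how per-split paging SQL differs between dialects.
public class PagingSketch {
    // MySQL-style paging, as appended by the generic DBRecordReader.
    // Accepted by MySQL, rejected by SQL Server.
    static String mysqlPaging(String baseQuery, long length, long start) {
        return baseQuery + " LIMIT " + length + " OFFSET " + start;
    }

    // SQL Server paging (2012 and later): OFFSET ... FETCH requires ORDER BY.
    static String sqlServerPaging(String baseQuery, String orderCol,
                                  long length, long start) {
        return baseQuery + " ORDER BY " + orderCol
            + " OFFSET " + start + " ROWS FETCH NEXT " + length + " ROWS ONLY";
    }

    public static void main(String[] args) {
        String base = "SELECT id, content FROM articles";
        System.out.println(mysqlPaging(base, 1000, 0));
        System.out.println(sqlServerPaging(base, "id", 1000, 0));
    }
}
```

Note that the database here is SQL Server 2008, which predates even the `OFFSET ... FETCH` syntax (added in SQL Server 2012); paging there would need a `ROW_NUMBER() OVER (...)` subquery instead. Either way, a fix would mean subclassing DBRecordReader with a SQL Server-aware `getSelectQuery()`, much as Hadoop already does for Oracle.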
