public class DatasourceInputFormat extends org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,InputRow>
| Modifier and Type | Field and Description |
|---|---|
| `static String` | `CONF_DRUID_SCHEMA` |
| `static String` | `CONF_INPUT_SEGMENTS` |
| `static String` | `CONF_MAX_SPLIT_SIZE` |
| `static String` | `CONF_TRANSFORM_SPEC` |
| Constructor and Description |
|---|
| `DatasourceInputFormat()` |
| Modifier and Type | Method and Description |
|---|---|
| `org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,InputRow>` | `createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)` |
| `List<org.apache.hadoop.mapreduce.InputSplit>` | `getSplits(org.apache.hadoop.mapreduce.JobContext context)` |
public static final String CONF_INPUT_SEGMENTS
public static final String CONF_DRUID_SCHEMA
public static final String CONF_TRANSFORM_SPEC
public static final String CONF_MAX_SPLIT_SIZE
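The constants above are Hadoop `Configuration` keys consumed by this input format. As a sketch of how they might be wired into a job, assuming each key carries a JSON-serialized spec (the placeholder values and the job name below are hypothetical, not Druid's actual payloads):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DatasourceJobSetup {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Hypothetical values: the exact JSON shapes are defined by Druid, not shown here.
        conf.set(DatasourceInputFormat.CONF_INPUT_SEGMENTS, "[ /* JSON list of segment descriptors */ ]");
        conf.set(DatasourceInputFormat.CONF_DRUID_SCHEMA, "{ /* JSON ingestion schema */ }");
        conf.set(DatasourceInputFormat.CONF_MAX_SPLIT_SIZE, "500000000"); // bytes, as a string

        Job job = Job.getInstance(conf, "druid-datasource-read");
        job.setInputFormatClass(DatasourceInputFormat.class);
    }
}
```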
public List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext context) throws IOException, InterruptedException

Specified by: `getSplits` in class `org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,InputRow>`
Throws: `IOException`, `InterruptedException`

public org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,InputRow> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException, InterruptedException

Specified by: `createRecordReader` in class `org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,InputRow>`
Throws: `IOException`, `InterruptedException`

Copyright © 2011–2018. All rights reserved.
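`getSplits` batches the configured segments into input splits, and `CONF_MAX_SPLIT_SIZE` suggests the batching is bounded by a byte-size cap. A minimal, self-contained sketch of that kind of size-bounded packing, assuming a toy `Segment` type with only an id and a byte size (these names and numbers are illustrative, not Druid's actual types or algorithm):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitPackingSketch {
    // Hypothetical stand-in for a segment descriptor: only its size matters here.
    record Segment(String id, long sizeBytes) {}

    // Pack segments in order into groups whose total size stays at or under maxSplitSize.
    // A single segment larger than maxSplitSize still gets a group of its own.
    static List<List<Segment>> pack(List<Segment> segments, long maxSplitSize) {
        List<List<Segment>> splits = new ArrayList<>();
        List<Segment> current = new ArrayList<>();
        long currentSize = 0;
        for (Segment s : segments) {
            if (!current.isEmpty() && currentSize + s.sizeBytes() > maxSplitSize) {
                splits.add(current);          // close the full group
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(s);
            currentSize += s.sizeBytes();
        }
        if (!current.isEmpty()) {
            splits.add(current);              // flush the last group
        }
        return splits;
    }

    public static void main(String[] args) {
        List<Segment> segments = List.of(
                new Segment("a", 300), new Segment("b", 300),
                new Segment("c", 500), new Segment("d", 100));
        // With a 600-byte cap: [a, b] and [c, d]
        System.out.println(pack(segments, 600).size()); // prints 2
    }
}
```

Each resulting group would then back one `InputSplit`, so a larger `CONF_MAX_SPLIT_SIZE` trades fewer map tasks for more data per task.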