Uses of Class io.druid.indexer.HadoopyShardSpec

Packages that use HadoopyShardSpec

Package | Description |
---|---|
io.druid.indexer | |
Methods in io.druid.indexer that return HadoopyShardSpec

Modifier and Type | Method and Description |
---|---|
HadoopyShardSpec | HadoopDruidIndexerConfig.getShardSpec(Bucket bucket) |
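For orientation, a minimal Java sketch of calling the lookup above. Only the `getShardSpec(Bucket)` signature comes from this page; the `ShardSpecLookup` class and `lookupShardSpec` helper are hypothetical, and the imports assume the classes live in io.druid.indexer as listed here.

```java
import io.druid.indexer.Bucket;
import io.druid.indexer.HadoopDruidIndexerConfig;
import io.druid.indexer.HadoopyShardSpec;

public class ShardSpecLookup
{
  // Hypothetical helper: resolves the HadoopyShardSpec that the indexer
  // configuration assigns to a given bucket, using getShardSpec(Bucket)
  // as listed in the table above.
  public static HadoopyShardSpec lookupShardSpec(HadoopDruidIndexerConfig config, Bucket bucket)
  {
    return config.getShardSpec(bucket);
  }
}
```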
Methods in io.druid.indexer that return types with arguments of type HadoopyShardSpec

Modifier and Type | Method and Description |
---|---|
Map<org.joda.time.DateTime,List<HadoopyShardSpec>> | HadoopTuningConfig.getShardSpecs() |
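A small sketch of reading the DateTime-keyed map returned by `getShardSpecs()`. The `ShardSpecInspector` class and `countShardSpecs` helper are hypothetical; only the return type and method name come from the table above.

```java
import java.util.List;
import java.util.Map;

import org.joda.time.DateTime;

import io.druid.indexer.HadoopTuningConfig;
import io.druid.indexer.HadoopyShardSpec;

public class ShardSpecInspector
{
  // Hypothetical helper: counts how many HadoopyShardSpecs the tuning config
  // defines across all bucket intervals, by walking the map from getShardSpecs().
  public static int countShardSpecs(HadoopTuningConfig tuningConfig)
  {
    int count = 0;
    for (Map.Entry<DateTime, List<HadoopyShardSpec>> entry : tuningConfig.getShardSpecs().entrySet()) {
      count += entry.getValue().size();
    }
    return count;
  }
}
```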
Method parameters in io.druid.indexer with type arguments of type HadoopyShardSpec

Modifier and Type | Method and Description |
---|---|
void | HadoopDruidIndexerConfig.setShardSpecs(Map<org.joda.time.DateTime,List<HadoopyShardSpec>> shardSpecs) |
HadoopTuningConfig | HadoopTuningConfig.withShardSpecs(Map<org.joda.time.DateTime,List<HadoopyShardSpec>> specs) |
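The two methods above update the shard-spec map in different styles: `setShardSpecs` mutates a HadoopDruidIndexerConfig in place, while `withShardSpecs` returns a new HadoopTuningConfig. A hedged sketch, assuming the caller already has the replacement map; the `ShardSpecUpdater` class and helper names are hypothetical.

```java
import java.util.List;
import java.util.Map;

import org.joda.time.DateTime;

import io.druid.indexer.HadoopDruidIndexerConfig;
import io.druid.indexer.HadoopTuningConfig;
import io.druid.indexer.HadoopyShardSpec;

public class ShardSpecUpdater
{
  // Copy-on-write style: withShardSpecs returns a new tuning config
  // carrying the replacement shard-spec map.
  public static HadoopTuningConfig replaceShardSpecs(
      HadoopTuningConfig tuningConfig,
      Map<DateTime, List<HadoopyShardSpec>> specs
  )
  {
    return tuningConfig.withShardSpecs(specs);
  }

  // In-place style: setShardSpecs mutates the existing indexer config.
  public static void applyShardSpecs(
      HadoopDruidIndexerConfig config,
      Map<DateTime, List<HadoopyShardSpec>> specs
  )
  {
    config.setShardSpecs(specs);
  }
}
```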
Constructor parameters in io.druid.indexer with type arguments of type HadoopyShardSpec

Constructor and Description |
---|
HadoopIngestionSpec(DataSchema dataSchema, HadoopIOConfig ioConfig, HadoopTuningConfig tuningConfig, String dataSource, io.druid.data.input.impl.TimestampSpec timestampSpec, io.druid.data.input.impl.DataSpec dataSpec, GranularitySpec granularitySpec, Map<String,Object> pathSpec, String workingPath, String segmentOutputPath, String version, PartitionsSpec partitionsSpec, boolean leaveIntermediate, Boolean cleanupOnFailure, Map<org.joda.time.DateTime,List<HadoopyShardSpec>> shardSpecs, boolean overwriteFiles, DataRollupSpec rollupSpec, DbUpdaterJobSpec updaterJobSpec, boolean ignoreInvalidRows, Map<String,String> jobProperties, boolean combineText, String timestampColumn, String timestampFormat, List<org.joda.time.Interval> intervals, com.metamx.common.Granularity segmentGranularity, String partitionDimension, Long targetPartitionSize) |
HadoopTuningConfig(String workingPath, String version, PartitionsSpec partitionsSpec, Map<org.joda.time.DateTime,List<HadoopyShardSpec>> shardSpecs, Integer rowFlushBoundary, boolean leaveIntermediate, Boolean cleanupOnFailure, boolean overwriteFiles, boolean ignoreInvalidRows, Map<String,String> jobProperties, boolean combineText) |
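Both constructors accept the shard-spec map keyed by bucket start time. Below is a minimal sketch of the shorter HadoopTuningConfig constructor; the `TuningConfigFactory` class, the null values for partitionsSpec and rowFlushBoundary, the boolean defaults, and the empty jobProperties map are assumptions standing in for whatever a real job would supply, not values taken from this page.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.joda.time.DateTime;

import io.druid.indexer.HadoopTuningConfig;
import io.druid.indexer.HadoopyShardSpec;

public class TuningConfigFactory
{
  // Hypothetical factory illustrating the 11-argument HadoopTuningConfig
  // constructor listed above. The shardSpecs map is assumed to be built
  // elsewhere, one List<HadoopyShardSpec> per bucket start time.
  public static HadoopTuningConfig buildTuningConfig(
      String workingPath,
      String version,
      Map<DateTime, List<HadoopyShardSpec>> shardSpecs
  )
  {
    return new HadoopTuningConfig(
        workingPath,                             // workingPath
        version,                                 // version
        null,                                    // partitionsSpec (assumed nullable here)
        shardSpecs,                              // shardSpecs keyed by bucket start time
        null,                                    // rowFlushBoundary (assumed nullable here)
        false,                                   // leaveIntermediate
        true,                                    // cleanupOnFailure
        false,                                   // overwriteFiles
        false,                                   // ignoreInvalidRows
        Collections.<String, String>emptyMap(),  // jobProperties (assumed empty)
        false                                    // combineText
    );
  }
}
```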
Copyright © 2015. All rights reserved.