| Class | Description |
| --- | --- |
| Bucket | |
| DetermineHashedPartitionsJob | Determines appropriate ShardSpecs for a job by estimating the approximate cardinality of the data set using HyperLogLog. |
| DetermineHashedPartitionsJob.DetermineCardinalityMapper | |
| DetermineHashedPartitionsJob.DetermineCardinalityReducer | |
| DetermineHashedPartitionsJob.DetermineHashedPartitionsPartitioner | |
| DeterminePartitionsJob | Determines appropriate ShardSpecs for a job by deciding whether partitioning is necessary and, if so, choosing the best dimension that satisfies the criteria: it must have exactly one value per row, and it must not generate oversized partitions. |
| DeterminePartitionsJob.DeterminePartitionsDimSelectionAssumeGroupedMapper | This DimSelection mapper runs on raw input data that is assumed to have already been grouped. |
| DeterminePartitionsJob.DeterminePartitionsDimSelectionCombiner | |
| DeterminePartitionsJob.DeterminePartitionsDimSelectionMapperHelper | Since there are two slightly different DimSelectionMappers, this class encapsulates the shared logic for emitting dimension value counts. |
| DeterminePartitionsJob.DeterminePartitionsDimSelectionOutputFormat | |
| DeterminePartitionsJob.DeterminePartitionsDimSelectionPartitioner | |
| DeterminePartitionsJob.DeterminePartitionsDimSelectionPostGroupByMapper | This DimSelection mapper runs on data generated by the GroupBy job. |
| DeterminePartitionsJob.DeterminePartitionsDimSelectionReducer | |
| DeterminePartitionsJob.DeterminePartitionsGroupByMapper | |
| DeterminePartitionsJob.DeterminePartitionsGroupByReducer | |
| HadoopDruidDetermineConfigurationJob | |
| HadoopDruidIndexerConfig | |
| HadoopDruidIndexerJob | |
| HadoopDruidIndexerMapper<KEYOUT,VALUEOUT> | |
| HadoopIngestionSpec | |
| HadoopIOConfig | |
| HadoopKerberosConfig | |
| HadoopTuningConfig | |
| HadoopWorkingDirCleaner | Used by ResetCluster to delete the Hadoop working path. |
| HadoopyShardSpec | A ShardSpec plus a shard ID that is unique across this run. |
| HadoopyStringInputRowParser | |
| IndexGeneratorJob | |
| IndexGeneratorJob.IndexGeneratorCombiner | |
| IndexGeneratorJob.IndexGeneratorMapper | |
| IndexGeneratorJob.IndexGeneratorOutputFormat | |
| IndexGeneratorJob.IndexGeneratorPartitioner | |
| IndexGeneratorJob.IndexGeneratorReducer | |
| IndexGeneratorJob.IndexGeneratorStats | |
| IndexingHadoopModule | |
| InputRowSerde | |
| InputRowSerde.DoubleIndexSerdeTypeHelper | |
| InputRowSerde.FloatIndexSerdeTypeHelper | |
| InputRowSerde.LongIndexSerdeTypeHelper | |
| InputRowSerde.StringIndexSerdeTypeHelper | |
| JobHelper | |
| MetadataStorageUpdaterJob | |
| SortableBytes | |
| SortableBytes.SortableBytesGroupingComparator | |
| SortableBytes.SortableBytesPartitioner | |
| SortableBytes.SortableBytesSortingComparator | |
| SQLMetadataStorageUpdaterJobHandler | |
| TaskLocation | |
| TaskStatusPlus | |
| Utils | |
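The HyperLogLog-based cardinality estimation that DetermineHashedPartitionsJob relies on can be illustrated with a minimal, self-contained sketch. This is not Druid's HyperLogLogCollector: the register layout, the SplitMix64 stand-in hash, and the bias corrections here are simplified assumptions for illustration only.

```java
// Minimal HyperLogLog sketch, illustrating the cardinality-estimation idea.
// NOT Druid's implementation; simplified for clarity.
public class SimpleHyperLogLog {
    private final int p;            // 2^p registers
    private final byte[] registers;

    public SimpleHyperLogLog(int p) {
        this.p = p;
        this.registers = new byte[1 << p];
    }

    // SplitMix64 finalizer, used here as a stand-in 64-bit hash.
    public static long hash64(long x) {
        x += 0x9E3779B97F4A7C15L;
        x = (x ^ (x >>> 30)) * 0xBF58476D1CE4E5B9L;
        x = (x ^ (x >>> 27)) * 0x94D049BB133111EBL;
        return x ^ (x >>> 31);
    }

    public void add(long hash) {
        int idx = (int) (hash >>> (64 - p));   // top p bits select a register
        long rest = hash << p;                 // remaining bits
        int rank = (rest == 0) ? (64 - p + 1) : Long.numberOfLeadingZeros(rest) + 1;
        if (rank > registers[idx]) {
            registers[idx] = (byte) rank;      // keep the max rank seen per register
        }
    }

    public double estimate() {
        int m = registers.length;
        double sum = 0.0;
        int zeros = 0;
        for (byte r : registers) {
            sum += Math.pow(2.0, -r);
            if (r == 0) zeros++;
        }
        double alpha = 0.7213 / (1 + 1.079 / m); // bias-correction constant
        double e = alpha * m * (double) m / sum;
        if (e <= 2.5 * m && zeros > 0) {
            e = m * Math.log((double) m / zeros); // small-range (linear counting) correction
        }
        return e;
    }

    public static void main(String[] args) {
        SimpleHyperLogLog hll = new SimpleHyperLogLog(14);
        for (long i = 0; i < 100_000; i++) {
            hll.add(hash64(i));
        }
        System.out.println("estimated cardinality ~ " + (long) hll.estimate());
    }
}
```

With 2^14 registers, the estimate for 100,000 distinct inputs typically lands within about 1% of the true count, which is why a sketch this small suffices for choosing partition counts.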
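The dimension-selection criteria listed for DeterminePartitionsJob (exactly one value per row, no oversized partitions) can be sketched as follows. All method and parameter names here are hypothetical; this is not Druid's actual selection code, just an illustration of the two criteria applied to per-dimension value counts.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch: pick a partition dimension whose counts show exactly one
// value per row and whose largest single-value bucket fits in one partition.
public class DimSelectionSketch {

    public static Optional<String> chooseDimension(
            Map<String, Map<String, Long>> valueCountsPerDim, // dim -> (value -> row count)
            long totalRows,
            long maxPartitionSize) {
        String best = null;
        long bestLargestBucket = Long.MAX_VALUE;
        for (Map.Entry<String, Map<String, Long>> e : valueCountsPerDim.entrySet()) {
            long sum = 0;
            long largest = 0;
            for (long c : e.getValue().values()) {
                sum += c;
                largest = Math.max(largest, c);
            }
            // Criterion 1: one value per row, so the counts must sum to totalRows.
            if (sum != totalRows) continue;
            // Criterion 2: the biggest single-value bucket must fit in one partition.
            if (largest > maxPartitionSize) continue;
            // Among valid dimensions, prefer the one with the most even spread.
            if (largest < bestLargestBucket) {
                bestLargestBucket = largest;
                best = e.getKey();
            }
        }
        return Optional.ofNullable(best);
    }

    public static void main(String[] args) {
        Map<String, Map<String, Long>> counts = new HashMap<>();
        counts.put("country", Map.of("US", 6L, "DE", 4L)); // single-valued, even spread
        counts.put("tags", Map.of("a", 8L, "b", 7L));      // sums to 15 > 10 rows: multi-valued
        counts.put("userId", Map.of("u1", 9L, "u2", 1L));  // "u1" bucket too large
        System.out.println(chooseDimension(counts, 10, 8)); // prints Optional[country]
    }
}
```

In the real job these per-dimension counts are what the DimSelection mappers and DeterminePartitionsDimSelectionMapperHelper emit; the sketch only shows why both criteria are needed before a dimension can be trusted for range partitioning.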