public class GoogleDataSegmentPusher extends Object implements DataSegmentPusher
Field: `JOINER`

| Constructor and Description |
|---|
| `GoogleDataSegmentPusher(GoogleStorage storage, GoogleAccountConfig config, com.fasterxml.jackson.databind.ObjectMapper jsonMapper)` |
| Modifier and Type | Method and Description |
|---|---|
| `File` | `createDescriptorFile(com.fasterxml.jackson.databind.ObjectMapper jsonMapper, DataSegment segment)` |
| `List<String>` | `getAllowedPropertyPrefixesForHadoop()` Property prefixes that should be added to the "allowedHadoopPrefix" config for passing down to Hadoop jobs. |
| `String` | `getPathForHadoop()` |
| `String` | `getPathForHadoop(String dataSource)` Deprecated. |
| `void` | `insert(File file, String contentType, String path, boolean replaceExisting)` |
| `Map<String,Object>` | `makeLoadSpec(URI finalIndexZipFilePath)` |
| `DataSegment` | `push(File indexFilesDir, DataSegment segment, boolean replaceExisting)` Pushes index files and segment descriptor to deep storage. |
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface DataSegmentPusher: getDefaultStorageDir, getStorageDir, makeIndexPathName
@Inject public GoogleDataSegmentPusher(GoogleStorage storage, GoogleAccountConfig config, com.fasterxml.jackson.databind.ObjectMapper jsonMapper)
@Deprecated public String getPathForHadoop(String dataSource)
Specified by: getPathForHadoop in interface DataSegmentPusher

public String getPathForHadoop()
Specified by: getPathForHadoop in interface DataSegmentPusher
public List<String> getAllowedPropertyPrefixesForHadoop()
Description copied from interface: DataSegmentPusher
Property prefixes that should be added to the "allowedHadoopPrefix" config for passing down to Hadoop jobs.
Specified by: getAllowedPropertyPrefixesForHadoop in interface DataSegmentPusher
public File createDescriptorFile(com.fasterxml.jackson.databind.ObjectMapper jsonMapper, DataSegment segment) throws IOException
Throws: IOException
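A minimal, JDK-only sketch of the descriptor-file step: the real method serializes the `DataSegment` with the injected Jackson `ObjectMapper`, but here the already-serialized JSON is passed in as a plain string so the example carries no Druid or Jackson dependencies. `DescriptorFileSketch` and its parameter are illustrative names, not Druid API.

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

class DescriptorFileSketch {
    // Hypothetical stand-in for createDescriptorFile: write the segment's
    // JSON representation to a temp file and return it. The real method
    // would call jsonMapper.writeValueAsBytes(segment) instead of taking
    // a pre-serialized string.
    static File createDescriptorFile(String segmentJson) throws IOException {
        File descriptorFile = File.createTempFile("descriptor", ".json");
        Files.write(descriptorFile.toPath(), segmentJson.getBytes(StandardCharsets.UTF_8));
        return descriptorFile;
    }

    public static void main(String[] args) throws IOException {
        File f = createDescriptorFile("{\"id\":\"example_segment\"}");
        System.out.println(new String(Files.readAllBytes(f.toPath()), StandardCharsets.UTF_8));
    }
}
```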
public void insert(File file, String contentType, String path, boolean replaceExisting) throws IOException
Throws: IOException
public DataSegment push(File indexFilesDir, DataSegment segment, boolean replaceExisting) throws IOException
Description copied from interface: DataSegmentPusher
Pushes index files and segment descriptor to deep storage.
Specified by: push in interface DataSegmentPusher
Parameters:
indexFilesDir - directory containing index files
segment - segment descriptor
replaceExisting - overwrites existing objects if true, else leaves existing objects unchanged on conflict. The behavior of the indexer determines whether this should be true or false. For example, since Tranquility does not guarantee that replica tasks will generate indexes with the same data, the first segment pushed should be favored, since otherwise multiple historicals may load segments with the same identifier but different contents, which is a bad situation. On the other hand, indexers that maintain exactly-once semantics by storing checkpoint data can lose or repeat data if a segment write fails because the segment already exists and overwriting is not permitted. This situation can occur if a task fails after pushing to deep storage but before writing to the metadata storage; see https://github.com/druid-io/druid/issues/5161. If replaceExisting is true, existing objects MUST be overwritten, since failure to do so will break exactly-once semantics. If replaceExisting is false, existing objects SHOULD be prioritized, but it is acceptable if they are overwritten (deep storages may be eventually consistent or otherwise unable to support transactional writes).
Throws:
IOException
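The replaceExisting contract above can be illustrated with a minimal in-memory sketch. `ReplaceExistingSketch` and its map-backed store are stand-ins for a GCS bucket, not part of Druid; the point is only the two branches: MUST overwrite when true, first-writer-wins when false.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ReplaceExistingSketch {
    // In-memory stand-in for a deep-storage bucket; real code talks to GCS.
    private final Map<String, byte[]> store = new ConcurrentHashMap<>();

    // Returns true if this call's contents ended up at the path.
    boolean insert(String path, byte[] contents, boolean replaceExisting) {
        if (replaceExisting) {
            // MUST overwrite: exactly-once indexers rely on this after a
            // failed push-then-metadata-write sequence.
            store.put(path, contents);
            return true;
        }
        // SHOULD keep the existing object: first writer wins on conflict.
        return store.putIfAbsent(path, contents) == null;
    }

    byte[] get(String path) {
        return store.get(path);
    }

    public static void main(String[] args) {
        ReplaceExistingSketch storage = new ReplaceExistingSketch();
        storage.insert("ds/2018/0/index.zip", new byte[]{1}, false);
        storage.insert("ds/2018/0/index.zip", new byte[]{2}, false); // ignored: no replace
        storage.insert("ds/2018/0/index.zip", new byte[]{3}, true);  // overwritten
        System.out.println(storage.get("ds/2018/0/index.zip")[0]);   // prints 3
    }
}
```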
public Map<String,Object> makeLoadSpec(URI finalIndexZipFilePath)
Specified by: makeLoadSpec in interface DataSegmentPusher
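A load spec is the map a historical uses to locate and fetch the pushed segment. The sketch below shows one plausible way to derive such a map from a `gs://` URI; the keys `"type"`, `"bucket"`, and `"path"` are assumptions about the spec's shape, not taken from this Javadoc.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

class LoadSpecSketch {
    // Hypothetical sketch of building a Google load spec from the final
    // index zip's URI; the real field names may differ.
    static Map<String, Object> makeLoadSpec(URI finalIndexZipFilePath) {
        Map<String, Object> loadSpec = new HashMap<>();
        loadSpec.put("type", "google");
        loadSpec.put("bucket", finalIndexZipFilePath.getHost());
        // getPath() starts with '/'; the object key inside the bucket does not.
        loadSpec.put("path", finalIndexZipFilePath.getPath().substring(1));
        return loadSpec;
    }

    public static void main(String[] args) {
        Map<String, Object> spec = makeLoadSpec(URI.create("gs://my-bucket/ds/2018/index.zip"));
        System.out.println(spec.get("bucket") + " " + spec.get("path")); // prints: my-bucket ds/2018/index.zip
    }
}
```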
Copyright © 2011–2018. All rights reserved.