public class CassandraDataSegmentPusher extends CassandraStorage implements DataSegmentPusher
| Constructor and Description | 
|---|
| `CassandraDataSegmentPusher(CassandraDataSegmentConfig config, com.fasterxml.jackson.databind.ObjectMapper jsonMapper)` |
| Modifier and Type | Method and Description | 
|---|---|
| `String` | `getPathForHadoop()` |
| `String` | `getPathForHadoop(String dataSource)` Deprecated. |
| `Map<String,Object>` | `makeLoadSpec(URI uri)` |
| `DataSegment` | `push(File indexFilesDir, DataSegment segment, boolean replaceExisting)` Pushes index files and segment descriptor to deep storage. |
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface DataSegmentPusher:
getAllowedPropertyPrefixesForHadoop, getDefaultStorageDir, getStorageDir, makeIndexPathName

@Inject
public CassandraDataSegmentPusher(CassandraDataSegmentConfig config,
                                  com.fasterxml.jackson.databind.ObjectMapper jsonMapper)
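In a running Druid cluster this constructor is invoked by Guice (note the @Inject annotation) when the cassandra-storage extension is loaded. Below is a minimal sketch of constructing the pusher by hand; the config property names (host, keyspace) and their values are assumptions for illustration, not confirmed by this page.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class PusherSetupSketch
{
  public static void main(String[] args) throws Exception
  {
    ObjectMapper jsonMapper = new ObjectMapper();

    // Hypothetical configuration JSON; in a real deployment these values
    // come from druid.storage.* runtime properties (names assumed here).
    CassandraDataSegmentConfig config = jsonMapper.readValue(
        "{\"host\": \"localhost:9160\", \"keyspace\": \"druid\"}",
        CassandraDataSegmentConfig.class
    );

    CassandraDataSegmentPusher pusher = new CassandraDataSegmentPusher(config, jsonMapper);
  }
}
```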
public String getPathForHadoop()
Specified by: getPathForHadoop in interface DataSegmentPusher

@Deprecated
public String getPathForHadoop(String dataSource)
Deprecated.
Specified by: getPathForHadoop in interface DataSegmentPusher

public DataSegment push(File indexFilesDir,
                        DataSegment segment,
                        boolean replaceExisting)
                 throws IOException
Description copied from interface: DataSegmentPusher
Pushes index files and segment descriptor to deep storage.
Specified by: push in interface DataSegmentPusher
Parameters:
indexFilesDir - directory containing index files
segment - segment descriptor
replaceExisting - overwrites existing objects if true; otherwise leaves existing objects unchanged on conflict.
                        The behavior of the indexer determines whether this should be true or false. For example,
                        since Tranquility does not guarantee that replica tasks will generate indexes with the same
                        data, the first segment pushed should be favored, since otherwise multiple historicals may
                        load segments with the same identifier but different contents, which is a bad situation. On
                        the other hand, an indexer that maintains exactly-once semantics by storing checkpoint data
                        can lose or repeat data if it fails to write a segment because the segment already exists
                        and overwriting is not permitted. This situation can occur if a task fails after pushing to
                        deep storage but before writing to the metadata storage; see
                        https://github.com/druid-io/druid/issues/5161.
                        If replaceExisting is true, existing objects MUST be overwritten, since failure to do so
                        will break exactly-once semantics. If replaceExisting is false, existing objects SHOULD be
                        prioritized, but it is acceptable if they are overwritten (deep storages may be eventually
                        consistent or otherwise unable to support transactional writes).
Throws:
IOException
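As a concrete illustration of the replaceExisting contract above, here is a hedged sketch of a push call. The local directory path is hypothetical, `segment` is assumed to come from an indexing task, and the io.druid.timeline.DataSegment import reflects the package layout of this javadoc's era (an assumption).

```java
import java.io.File;
import java.io.IOException;

import io.druid.timeline.DataSegment;

public class PushSketch
{
  // Sketch only: `pusher` and `segment` are assumed to exist already.
  static DataSegment pushWithOverwrite(CassandraDataSegmentPusher pusher, DataSegment segment)
      throws IOException
  {
    // Hypothetical local directory containing the segment's index files.
    File indexFilesDir = new File("/tmp/persist/example-segment");

    // replaceExisting = true: an object already stored under the same segment
    // identifier MUST be overwritten, which keeps a checkpointing indexer's
    // retried pushes exactly-once.
    return pusher.push(indexFilesDir, segment, true);
  }
}
```

The returned DataSegment carries the loadSpec that tells historicals where to find the segment in deep storage.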
public Map<String,Object> makeLoadSpec(URI uri)
Specified by: makeLoadSpec in interface DataSegmentPusher