public class AzureDataSegmentPusher extends Object implements DataSegmentPusher
| Constructor and Description |
|---|
| AzureDataSegmentPusher(AzureStorage azureStorage, AzureAccountConfig config, com.fasterxml.jackson.databind.ObjectMapper jsonMapper) |
| Modifier and Type | Method and Description |
|---|---|
| File | createSegmentDescriptorFile(com.fasterxml.jackson.databind.ObjectMapper jsonMapper, DataSegment segment) |
| List<String> | getAllowedPropertyPrefixesForHadoop(): Property prefixes that should be added to the "allowedHadoopPrefix" config for passing down to Hadoop jobs. |
| Map<String,String> | getAzurePaths(DataSegment segment) |
| String | getPathForHadoop() |
| String | getPathForHadoop(String dataSource): Deprecated. |
| Map<String,Object> | makeLoadSpec(URI uri) |
| DataSegment | push(File indexFilesDir, DataSegment segment, boolean replaceExisting): Pushes index files and segment descriptor to deep storage. |
| DataSegment | uploadDataSegment(DataSegment segment, int version, long size, File compressedSegmentData, File descriptorFile, Map<String,String> azurePaths, boolean replaceExisting) |
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface DataSegmentPusher:
getDefaultStorageDir, getStorageDir, makeIndexPathName

@Inject
public AzureDataSegmentPusher(AzureStorage azureStorage, AzureAccountConfig config, com.fasterxml.jackson.databind.ObjectMapper jsonMapper)
@Deprecated
public String getPathForHadoop(String dataSource)
Deprecated.
Specified by: getPathForHadoop in interface DataSegmentPusher

public String getPathForHadoop()
Specified by: getPathForHadoop in interface DataSegmentPusher

public List<String> getAllowedPropertyPrefixesForHadoop()
Property prefixes that should be added to the "allowedHadoopPrefix" config for passing down to Hadoop jobs.
Specified by: getAllowedPropertyPrefixesForHadoop in interface DataSegmentPusher

public File createSegmentDescriptorFile(com.fasterxml.jackson.databind.ObjectMapper jsonMapper, DataSegment segment) throws IOException
Throws: IOException

public Map<String,String> getAzurePaths(DataSegment segment)

public DataSegment uploadDataSegment(DataSegment segment, int version, long size, File compressedSegmentData, File descriptorFile, Map<String,String> azurePaths, boolean replaceExisting) throws com.microsoft.azure.storage.StorageException, IOException, URISyntaxException
Throws: com.microsoft.azure.storage.StorageException, IOException, URISyntaxException

public DataSegment push(File indexFilesDir, DataSegment segment, boolean replaceExisting) throws IOException
Pushes index files and segment descriptor to deep storage.
Specified by: push in interface DataSegmentPusher
Parameters:
indexFilesDir - directory containing index files
segment - segment descriptor
replaceExisting - overwrites existing objects if true, else leaves existing objects unchanged on conflict.
                        The behavior of the indexer determines whether this should be true or false. For example,
                        since Tranquility does not guarantee that replica tasks will generate indexes with the same
                        data, the first segment pushed should be favored since otherwise multiple historicals may
                        load segments with the same identifier but different contents which is a bad situation. On
                        the other hand, indexers that maintain exactly-once semantics by storing checkpoint data can
                        lose or repeat data if it fails to write a segment because it already exists and overwriting
                        is not permitted. This situation can occur if a task fails after pushing to deep storage but
                        before writing to the metadata storage, see: https://github.com/druid-io/druid/issues/5161.
                        If replaceExisting is true, existing objects MUST be overwritten, since failure to do so
                        will break exactly-once semantics. If replaceExisting is false, existing objects SHOULD be
                        prioritized but it is acceptable if they are overwritten (deep storages may be eventually
                        consistent or otherwise unable to support transactional writes).
Throws: IOException

public Map<String,Object> makeLoadSpec(URI uri)
Specified by: makeLoadSpec in interface DataSegmentPusher

Copyright © 2011–2018. All rights reserved.
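The replaceExisting contract described for push() (overwrite is mandatory when true, first-writer-wins is preferred when false) can be sketched with a toy model. This is not Druid code: ToyDeepStorage and its map-backed store are illustrative stand-ins for a real blob store such as Azure.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (not Druid code) of the push() overwrite contract:
// if replaceExisting is true the new object MUST overwrite;
// if false, an existing object SHOULD be kept on conflict.
class ToyDeepStorage {
    private final Map<String, String> blobs = new HashMap<>();

    /** Pushes content under path; returns what is stored there afterwards. */
    String push(String path, String content, boolean replaceExisting) {
        if (replaceExisting) {
            blobs.put(path, content);          // must overwrite to keep exactly-once semantics
        } else {
            blobs.putIfAbsent(path, content);  // first segment pushed is favored
        }
        return blobs.get(path);
    }
}

public class PushContractDemo {
    public static void main(String[] args) {
        ToyDeepStorage storage = new ToyDeepStorage();
        storage.push("seg/index.zip", "v1", false);
        // conflict with replaceExisting=false: the existing object is kept
        System.out.println(storage.push("seg/index.zip", "v2", false)); // prints v1
        // conflict with replaceExisting=true: the new object must win
        System.out.println(storage.push("seg/index.zip", "v3", true));  // prints v3
    }
}
```

The false branch mirrors the Tranquility case above (replicas may differ, so the first push wins); the true branch mirrors checkpointing indexers, which must be able to retry a push that previously reached deep storage but not the metadata store.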