ddputils.dll.mui Microsoft Data Deduplication Common Library 9d16ee5eabc8e84309a102ba476e3522

File info

File name: ddputils.dll.mui
Size: 129536 bytes
MD5: 9d16ee5eabc8e84309a102ba476e3522
SHA1: d6a2e9cd733855b817599c5eb98bdd3949d70c0a
SHA256: e47c13a41ad631986ce6e82b233b9720697ef6647735c38dc59e0ac7ca1165b4
Operating systems: Windows 10
Extension: MUI
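
A minimal sketch for checking a local copy of the file against the digests listed above. The path is an assumption; point it at wherever your copy of ddputils.dll.mui actually lives.

```python
# Sketch: verify the MD5/SHA1/SHA256 digests listed above against a local copy.
# PATH is an assumption; adjust it to the location of your copy of the file.
import hashlib

PATH = r"C:\Windows\System32\en-GB\ddputils.dll.mui"  # hypothetical location

EXPECTED = {
    "md5": "9d16ee5eabc8e84309a102ba476e3522",
    "sha1": "d6a2e9cd733855b817599c5eb98bdd3949d70c0a",
    "sha256": "e47c13a41ad631986ce6e82b233b9720697ef6647735c38dc59e0ac7ca1165b4",
}

with open(PATH, "rb") as f:
    data = f.read()

for name, expected in EXPECTED.items():
    actual = hashlib.new(name, data).hexdigest()
    status = "OK" if actual == expected else "MISMATCH"
    print(f"{name}: {actual} ({status})")
```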

Translations messages and strings

If an error occurs, or a message appears in English (British) and you cannot find a solution, check the corresponding answer in English. The table below shows how each phrase reads in English. (A short sketch of reading these strings back from the MUI file follows the table.)

id English (British) English
100ddp ddp
1001Operation: Operation:
1002Context: Context:
1003Error-specific details: Error-specific details:
1004Failure: Failure:
1011Error Error
1012Volume name Volume name
1013Shadow copy volume Shadow copy volume
1014Configuration file Configuration file
1015The domain controller is unavailable. The domain controller is unavailable.
1016Server Server
1017Domain Domain
1018File name File name
1020Directory Directory
1021Chunk store Chunk store
1022Chunk ID Chunk ID
1023Stream map Stream map
1024Chunk store container Chunk store container
1025File path File path
1026File ID File ID
1027Chunk size Chunk size
1028Chunk offset Chunk offset
1029Chunk flags Chunk flags
1030Recorded time Recorded time
1031Error message Error message
1034Source context Source context
1037Inner error context Inner error context
1038Error timestamp Error timestamp
1039File offset File offset
1040Failure reason Failure reason
1041Retry count Retry count
1042Request ID Request ID
1043Stream map count Stream map count
1044Chunk count Chunk count
1045Data size Data size
2001Starting File Server Deduplication Service. Starting File Server Deduplication Service.
2002Stopping the Data Deduplication service. Stopping the Data Deduplication service.
2003Checking the File Server Deduplication global configuration store. Checking the File Server Deduplication global configuration store.
2101Initializing the data deduplication mini-filter. Initializing the data deduplication mini-filter.
2105Sending backup components list to VSS system. Sending backup components list to VSS system.
2106Preparing for backup. Preparing for backup.
2107Performing pre-restore operations. Performing pre-restore operations.
2108Performing post-restore operations. Performing post-restore operations.
2110Processing File Server Deduplication event. Processing File Server Deduplication event.
2111Creating a chunk store. Creating a chunk store.
2112Initializing chunk store. Initializing chunk store.
2113Uninitializing chunk store. Uninitializing chunk store.
2114Creating a chunk store session. Creating a chunk store session.
2115Committing a chunk store session. Committing a chunk store session.
2116Aborting a chunk store session. Aborting a chunk store session.
2117Initiating creation of a chunk store stream. Initiating creation of a chunk store stream.
2118Inserting a new chunk to a chunk store stream. Inserting a new chunk to a chunk store stream.
2119Inserting an existing chunk to a chunk stream. Inserting an existing chunk to a chunk stream.
2120Committing creation of a chunk store stream. Committing creation of a chunk store stream.
2121Aborting creation of a chunk store stream. Aborting creation of a chunk store stream.
2122Committing changes to a chunk store container. Committing changes to a chunk store container.
2123Changes made to a chunk store container have been flushed to disk. Changes made to a chunk store container have been flushed to disk.
2124Making a new chunk store container ready to use. Making a new chunk store container ready to use.
2125Rolling back the last committed changes to a chunk store container. Rolling back the last committed changes to a chunk store container.
2126Marking a chunk store container as read-only. Marking a chunk store container as read-only.
2127Enumerating all containers in a chunk store. Enumerating all containers in a chunk store.
2128Preparing a chunk store container for chunk insertion. Preparing a chunk store container for chunk insertion.
2129Initializing a new chunk store container. Initializing a new chunk store container.
2130Opening an existing chunk store container. Opening an existing chunk store container.
2131Inserting a new chunk to a chunk store container. Inserting a new chunk to a chunk store container.
2132Repairing a chunk store stamp file. Repairing a chunk store stamp file.
2133Creating a chunk store stamp file. Creating a chunk store stamp file.
2134Opening a chunk store stream. Opening a chunk store stream.
2135Reading stream map entries from a chunk store stream. Reading stream map entries from a chunk store stream.
2136Reading a chunk store chunk. Reading a chunk store chunk.
2137Closing a chunk store stream. Closing a chunk store stream.
2138Reading a chunk store container. Reading a chunk store container.
2139Opening a chunk store container log file. Opening a chunk store container log file.
2140Reading a chunk store container log file. Reading a chunk store container log file.
2141Writing entries to a chunk store container log file. Writing entries to a chunk store container log file.
2142Enumerating chunk store container log files. Enumerating chunk store container log files.
2143Deleting chunk store container log files. Deleting chunk store container log files.
2144Reading a chunk store container bitmap file. Reading a chunk store container bitmap file.
2145Writing a chunk store container bitmap file. Writing a chunk store container bitmap file.
2146Deleting a chunk store container bitmap file. Deleting a chunk store container bitmap file.
2147Starting chunk store garbage collection. Starting chunk store garbage collection.
2148Indexing active chunk references. Indexing active chunk references.
2149Processing deleted chunk store streams. Processing deleted chunk store streams.
2150Identifying unreferenced chunks. Identifying unreferenced chunks.
2151Enumerating the chunk store. Enumerating the chunk store.
2152Initializing the chunk store enumerator. Initializing the chunk store enumerator.
2153Initializing the stream map parser. Initializing the stream map parser.
2154Iterating the stream map. Iterating the stream map.
2155Initializing chunk store compaction. Initializing chunk store compaction.
2156Compacting chunk store containers. Compacting chunk store containers.
2157Initializing stream map compaction reconciliation. Initializing stream map compaction reconciliation.
2158Reconciling stream maps due to data compaction. Reconciling stream maps due to data compaction.
2159Initializing chunk store reconciliation. Initializing chunk store reconciliation.
2160Reconciling duplicate chunks in the chunk store. Reconciling duplicate chunks in the chunk store.
2161Initializing the deduplication garbage collection job. Initializing the deduplication garbage collection job.
2162Running the deduplication garbage collection job. Running the deduplication garbage collection job.
2163Canceling the deduplication garbage collection job. Canceling the deduplication garbage collection job.
2164Waiting for the deduplication garbage collection job to complete. Waiting for the deduplication garbage collection job to complete.
2165Initializing the deduplication job. Initializing the deduplication job.
2166Running the deduplication job. Running the deduplication job.
2167Canceling the deduplication job. Canceling the deduplication job.
2168Waiting for the deduplication job to complete. Waiting for the deduplication job to complete.
2169Initializing the deduplication scrubbing job. Initializing the deduplication scrubbing job.
2170Running the deduplication scrubbing job. Running the deduplication scrubbing job.
2171Canceling the deduplication scrubbing job. Canceling the deduplication scrubbing job.
2172Waiting for the deduplication scrubbing job to complete. Waiting for the deduplication scrubbing job to complete.
2173Opening a corruption log file. Opening a corruption log file.
2174Reading a corruption log file. Reading a corruption log file.
2175Writing an entry to a corruption log file. Writing an entry to a corruption log file.
2176Enumerating corruption log files. Enumerating corruption log files.
2206Creating a chunk store chunk sequence. Creating a chunk store chunk sequence.
2207Adding a chunk to a chunk store sequence. Adding a chunk to a chunk store sequence.
2208Completing creation of a chunk store sequence. Completing creation of a chunk store sequence.
2209Reading a chunk store sequence. Reading a chunk store sequence.
2210Continuing a chunk store sequence. Continuing a chunk store sequence.
2211Aborting a chunk store sequence. Aborting a chunk store sequence.
2212Initializing the deduplication analysis job. Initializing the deduplication analysis job.
2213Running the deduplication analysis job. Running the deduplication analysis job.
2214Canceling the deduplication analysis job. Canceling the deduplication analysis job.
2215Waiting for the deduplication analysis job to complete. Waiting for the deduplication analysis job to complete.
2216Repair chunk store container header. Repair chunk store container header.
2217Repair chunk store container redirection table. Repair chunk store container redirection table.
2218Repair chunk store chunk. Repair chunk store chunk.
2219Clone chunk store container. Clone chunk store container.
2220Scrubbing chunk store. Scrubbing chunk store.
2221Detecting chunk store corruptions. Detecting chunk store corruptions.
2222Loading the deduplication corruption logs. Loading the deduplication corruption logs.
2223Cleaning up the deduplication corruption logs. Cleaning up the deduplication corruption logs.
2224Determining the set of user files affected by chunk store corruptions. Determining the set of user files affected by chunk store corruptions.
2225Reporting corruptions. Reporting corruptions.
2226Estimating memory requirement for the deduplication scrubbing job. Estimating memory requirement for the deduplication scrubbing job.
2227Deep garbage collection initialization has started. Deep garbage collection initialization has started.
2228Starting deep garbage collection on stream map containers. Starting deep garbage collection on stream map containers.
2229Starting deep garbage collection on data containers. Starting deep garbage collection on data containers.
2230Initialize bitmaps on containers Initialize bitmaps on containers
2231Scanning the reparse point index to determine which stream map is being referenced. Scanning the reparse point index to determine which stream map is being referenced.
2232Saving deletion bitmap. Saving deletion bitmap.
2233Scan the stream map containers to mark referenced chunks. Scan the stream map containers to mark referenced chunks.
2234Convert bitmap to chunk delete log Convert bitmap to chunk delete log
2235Compact Data Containers Compact Data Containers
2236Compact Stream Map Containers Compact Stream Map Containers
2237Change a chunk store container generation. Change a chunk store container generation.
2238Start change logging. Start change logging.
2239Stop change logging. Stop change logging.
2240Add a merged target chunk store container. Add a merged target chunk store container.
2241Processing tentatively deleted chunks. Processing tentatively deleted chunks.
2242Check version of chunk store. Check version of chunk store.
2243Initializing the corruption table. Initializing the corruption table.
2244Writing out the corruption table. Writing out the corruption table.
2245Deleting the corruption table file. Deleting the corruption table file.
2246Repairing corruptions. Repairing corruptions.
2247Updating corruption table with new logs. Updating corruption table with new logs.
2248Destroying chunk store. Destroying chunk store.
2249Marking chunk store as deleted. Marking chunk store as deleted.
2250Inserting corruption entry into table. Inserting corruption entry into table.
2251Checking chunk store consistency. Checking chunk store consistency.
2252Updating a chunk store file list. Updating a chunk store file list.
2253Recovering a chunk store file list from redundancy. Recovering a chunk store file list from redundancy.
2254Adding an entry to a chunk store file list. Adding an entry to a chunk store file list.
2255Replacing an entry in a chunk store file list. Replacing an entry in a chunk store file list.
2256Deleting an entry in a chunk store file list. Deleting an entry in a chunk store file list.
2257Reading a chunk store file list. Reading a chunk store file list.
2258Reading a chunk store container directory file. Reading a chunk store container directory file.
2259Writing a chunk store container directory file. Writing a chunk store container directory file.
2260Deleting a chunk store container directory file. Deleting a chunk store container directory file.
2261Setting FileSystem allocation for chunk store container file. Setting FileSystem allocation for chunk store container file.
2262Initializing the deduplication unoptimization job. Initializing the deduplication unoptimization job.
2263Running the deduplication unoptimization job. Running the deduplication unoptimization job.
2264Restoring dedup file Restoring dedup file
2265Reading dedup information Reading dedup information
2266Building container list Building container list
2267Building read plan Building read plan
2268Executing read plan Executing read plan
2269Running deep scrubbing Running deep scrubbing
2270Scanning reparse point index during deep scrub Scanning reparse point index during deep scrub
2271Logging reparse point during deep scrub Logging reparse point during deep scrub
2272Scanning stream map containers during deep scrub Scanning stream map containers during deep scrub
2273Scrubbing a stream map container Scrubbing a stream map container
2274Logging a stream map's entries during deep scrub Logging a stream map's entries during deep scrub
2275Reading a container's redirection table during deep scrub Reading a container's redirection table during deep scrub
2276Scanning data containers during deep scrub Scanning data containers during deep scrub
2277Scrubbing a data container Scrubbing a data container
2278Scrubbing a data chunk Scrubbing a data chunk
2279Verifying SM entry to DC hash link Verifying SM entry to DC hash link
2280Logging a record during deep scrub Logging a record during deep scrub
2281Writing a batch of log records during deep scrub Writing a batch of log records during deep scrub
2282Finalizing a deep scrub temporary log Finalizing a deep scrub temporary log
2283Deep scrubbing log manager log record Deep scrubbing log manager log record
2284Finalizing deep scrub log manager Finalizing deep scrub log manager
2285Initializing deep scrub chunk index table Initializing deep scrub chunk index table
2286Inserting a chunk into deep scrub chunk index table Inserting a chunk into deep scrub chunk index table
2287Looking up a chunk from deep scrub chunk index table Looking up a chunk from deep scrub chunk index table
2288Rebuilding a chunk index table during deep scrub Rebuilding a chunk index table during deep scrub
2289Resetting the deep scrubbing logger cache Resetting the deep scrubbing logger cache
2290Resetting the deep scrubbing log manager Resetting the deep scrubbing log manager
2291Scanning hotspot containers during deep scrub Scanning hotspot containers during deep scrub
2292Scrubbing a hotspot container Scrubbing a hotspot container
2293Scrubbing the hotspot table Scrubbing the hotspot table
2294Cleaning up the deduplication deep scrub corruption logs Cleaning up the deduplication deep scrub corruption logs
2295Computing deduplication file metadata Computing deduplication file metadata
2296Scanning recall bitmap during deep scrub Scanning recall bitmap during deep scrub
2297Loading a heat map for a user file Loading a heat map for a user file
2298Saving a heat map for a user file Saving a heat map for a user file
2299Inserting a hot chunk to a chunk stream. Inserting a hot chunk to a chunk stream.
2300Deleting a heat map for a user file Deleting a heat map for a user file
2301Creating shadow copy set. Creating shadow copy set.
2302Initializing scan for optimization. Initializing scan for optimization.
2303Scanning the NTFS USN journal Scanning the NTFS USN journal
2304Initializing the USN scanner Initializing the USN scanner
2305Start a new data chunkstore session Start a new data chunkstore session
2306Commit a data chunkstore session Commit a data chunkstore session
2307Initializing the deduplication data port job. Initializing the deduplication data port job.
2308Running the deduplication data port job. Running the deduplication data port job.
2309Canceling the deduplication data port job. Canceling the deduplication data port job.
2310Waiting for the deduplication data port job to complete. Waiting for the deduplication data port job to complete.
2311Lookup chunks request. Lookup chunks request.
2312Insert chunks request. Insert chunks request.
2313Commit stream maps request. Commit stream maps request.
2314Get streams request. Get streams request.
2315Get chunks request. Get chunks request.
2401Initializing workload manager. Initializing workload manager.
2402Canceling a job. Canceling a job.
2403Enqueue a job. Enqueue a job.
2404Initialize job manifest. Initialize job manifest.
2405Launch a job host process. Launch a job host process.
2406Validate a job host process. Validate a job host process.
2407Initializing a job. Initializing a job.
2408Terminate a job host process. Terminate a job host process.
2409Uninitializing workload manager. Uninitializing workload manager.
2410Handshaking with a job. Handshaking with a job.
2411Job completion callback. Job completion callback.
2412Running a job. Running a job.
2413Checking ownership of Csv volume. Checking ownership of Csv volume.
2414Adding Csv volume for monitoring. Adding Csv volume for monitoring.
5001TRUE TRUE
5002FALSE FALSE
5003
5005Unknown error Unknown error
5101Data Deduplication Service Data Deduplication Service
5102The Data Deduplication service enables the deduplication and compression of data on selected volumes in order to optimize disk space used. If this service is stopped, optimization will no longer occur but access to already optimized data will continue to function. The Data Deduplication service enables the deduplication and compression of data on selected volumes in order to optimize disk space used. If this service is stopped, optimization will no longer occur but access to already optimized data will continue to function.
5105Dedup Dedup
5106The Data Deduplication filter driver enables read/write I/O to deduplicated files. The Data Deduplication filter driver enables read/write I/O to deduplicated files.
5201The chunk store on volume %s. Select this if you are using optimized backup. The chunk store on volume %s. Select this if you are using optimized backup.
5202Data deduplication configuration on volume %s Data deduplication configuration on volume %s
5203Data Deduplication Volume Shadow Copy Service Data Deduplication Volume Shadow Copy Service
5204Data Deduplication VSS writer guides backup applications to back up volumes with deduplication. Data Deduplication VSS writer guides backup applications to back up volumes with deduplication.
5205Data deduplication state on volume %s Data deduplication state on volume %s
5301Data deduplication optimization Data deduplication optimization
5302Data deduplication garbage collection Data deduplication garbage collection
5303Data deduplication scrubbing Data deduplication scrubbing
5304Data deduplication unoptimization Data deduplication unoptimization
5305Queued Queued
5306Initializing Initializing
5307Running Running
5308Completed Completed
5309Pending Cancel Pending Cancel
5310Canceled Canceled
5311Failed Failed
5312Data deduplication scrubbing job should be run on this volume. Data deduplication scrubbing job should be run on this volume.
5313An unsupported path was detected and will be skipped. An unsupported path was detected and will be skipped.
5314Data deduplication dataport Data deduplication dataport
5401This task runs the data deduplication optimization job on all enabled volumes. This task runs the data deduplication optimization job on all enabled volumes.
5402This task runs the data deduplication garbage collection job on all enabled volumes. This task runs the data deduplication garbage collection job on all enabled volumes.
5403This task runs the data deduplication scrubbing job on all enabled volumes. This task runs the data deduplication scrubbing job on all enabled volumes.
5404This task runs the data deduplication unoptimization job on all enabled volumes. This task runs the data deduplication unoptimization job on all enabled volumes.
5405This task runs the data deduplication data port job on all enabled volumes. This task runs the data deduplication data port job on all enabled volumes.
0x00565301Reconciliation of chunk store is due. Reconciliation of chunk store is due.
0x00565302There are no actions associated with this job. There are no actions associated with this job.
0x00565303Data deduplication cannot run this job on this Csv volume on this node. Data deduplication cannot run this job on this Csv volume on this node.
0x00565304Data deduplication cannot run this cmdlet on this Csv volume on this node. Data deduplication cannot run this cmdlet on this Csv volume on this node.
0x10000001Reporting Reporting
0x10000002Filter Filter
0x10000003Kernel mode stream store Kernel mode stream store
0x10000004Kernel mode chunk store Kernel mode chunk store
0x10000005Kernel mode chunk container Kernel mode chunk container
0x10000006Kernel mode file cache Kernel mode file cache
0x30000000Info Info
0x30000001Start Start
0x30000002Stop Stop
0x50000003Warning Warning
0x50000004Information Information
0x70000001Data Deduplication Optimization Task Data Deduplication Optimization Task
0x70000002Data Deduplication Garbage Collection Task Data Deduplication Garbage Collection Task
0x70000003Data Deduplication Scrubbing Task Data Deduplication Scrubbing Task
0x70000004Data Deduplication Unoptimization Task Data Deduplication Unoptimization Task
0x70000005Open stream store stream Open stream store stream
0x70000006Prepare for paging IO Prepare for paging IO
0x70000007Read stream map Read stream map
0x70000008Read chunks Read chunks
0x70000009Compute checksum Compute checksum
0x7000000AGet container entry Get container entry
0x7000000BGet maximum generation for container Get maximum generation for container
0x7000000COpen chunk container Open chunk container
0x7000000DInitialize chunk container redirection table Initialize chunk container redirection table
0x7000000EValidate chunk container redirection table Validate chunk container redirection table
0x7000000FGet chunk container valid data length Get chunk container valid data length
0x70000010Get offset from chunk container redirection table Get offset from chunk container redirection table
0x70000011Read chunk container block Read chunk container block
0x70000012Clear chunk container block Clear chunk container block
0x70000013Copy chunk Copy chunk
0x70000014Initialize file cache Initialize file cache
0x70000015Map file cache data Map file cache data
0x70000016Unpin file cache data Unpin file cache data
0x70000017Copy file cache data Copy file cache data
0x70000018Read underlying file cache data Read underlying file cache data
0x70000019Get chunk container file size Get chunk container file size
0x7000001APin stream map Pin stream map
0x7000001BPin chunk container Pin chunk container
0x7000001CPin chunk Pin chunk
0x7000001DAllocate pool buffer Allocate pool buffer
0x7000001EUnpin chunk container Unpin chunk container
0x7000001FUnpin chunk Unpin chunk
0x70000020Dedup read processing Dedup read processing
0x70000021Get first stream map entry Get first stream map entry
0x70000022Read chunk metadata Read chunk metadata
0x70000023Read chunk data Read chunk data
0x70000024Reference TlCache data Reference TlCache data
0x70000025Read chunk data from stream store Read chunk data from stream store
0x70000026Assemble chunk data Assemble chunk data
0x70000027Decompress chunk data Decompress chunk data
0x70000028Copy chunk data in to user buffer Copy chunk data in to user buffer
0x70000029Insert chunk data in to tlcache Insert chunk data in to tlcache
0x7000002ARead data from dedup reparse point file Read data from dedup reparse point file
0x7000002BPrepare stream map Prepare stream map
0x7000002CPatch clean ranges Patch clean ranges
0x7000002DWriting data to dedup file Writing data to dedup file
0x7000002EQueue write request on dedup file Queue write request on dedup file
0x7000002FDo copy on write work on dedup file Do copy on write work on dedup file
0x70000030Do full recall on dedup file Do full recall on dedup file
0x70000031Do partial recall on dedup file Do partial recall on dedup file
0x70000032Do dummy paging read on dedup file Do dummy paging read on dedup file
0x70000033Read clean data for recalling file Read clean data for recalling file
0x70000034Write clean data to dedup file normally Write clean data to dedup file normally
0x70000035Write clean data to dedup file paged Write clean data to dedup file paged
0x70000036Recall dedup file using paging Io Recall dedup file using paging Io
0x70000037Flush dedup file after recall Flush dedup file after recall
0x70000038Update bitmap after recall on dedup file Update bitmap after recall on dedup file
0x70000039Delete dedup reparse point Delete dedup reparse point
0x7000003AOpen dedup file Open dedup file
0x7000003BLocking user buffer for read Locking user buffer for read
0x7000003CGet system address for MDL Get system address for MDL
0x7000003DRead clean dedup file Read clean dedup file
0x7000003EGet range state Get range state
0x7000003FGet chunk body Get chunk body
0x70000040Release chunk Release chunk
0x70000041Release decompress chunk context Release decompress chunk context
0x70000042Prepare decompress chunk context Prepare decompress chunk context
0x70000043Copy data to compressed buffer Copy data to compressed buffer
0x70000044Release data from TL Cache Release data from TL Cache
0x70000045Queue async read request Queue async read request
0x80565301The requested object was not found. The requested object was not found.
0x80565302One (or more) of the arguments given to the task scheduler is not valid. One (or more) of the arguments given to the task scheduler is not valid.
0x80565303The specified object already exists. The specified object already exists.
0x80565304The specified path was not found. The specified path was not found.
0x80565305The specified user is invalid. The specified user is invalid.
0x80565306The specified path is invalid. The specified path is invalid.
0x80565307The specified name is invalid. The specified name is invalid.
0x80565308The specified property is out of range. The specified property is out of range.
0x80565309A required filter driver is either not installed, not loaded, or not ready for service. A required filter driver is either not installed, not loaded, or not ready for service.
0x8056530AThere is insufficient disk space to perform the requested operation. There is insufficient disk space to perform the requested operation.
0x8056530BThe specified volume type is not supported. Deduplication is supported on fixed, write-enabled NTFS data volumes and CSV backed by NTFS data volumes. The specified volume type is not supported. Deduplication is supported on fixed, write-enabled NTFS data volumes and CSV backed by NTFS data volumes.
0x8056530CData deduplication encountered an unexpected error. Check the Data Deduplication Operational event log for more information. Data deduplication encountered an unexpected error. Check the Data Deduplication Operational event log for more information.
0x8056530DThe specified scan log cursor has expired. The specified scan log cursor has expired.
0x8056530EThe file system might be corrupted. Please run the CHKDSK utility. The file system might be corrupted. Please run the CHKDSK utility.
0x8056530FA volume shadow copy could not be created or was unexpectedly deleted. A volume shadow copy could not be created or was unexpectedly deleted.
0x80565310Data deduplication encountered a corrupted XML configuration file. Data deduplication encountered a corrupted XML configuration file.
0x80565311The Data Deduplication service could not access the global configuration because the Cluster service is not running. The Data Deduplication service could not access the global configuration because the Cluster service is not running.
0x80565312The Data Deduplication service could not access the global configuration because it has not been installed yet. The Data Deduplication service could not access the global configuration because it has not been installed yet.
0x80565313Data deduplication failed to access the volume. It may be offline. Data deduplication failed to access the volume. It may be offline.
0x80565314The module encountered an invalid parameter or a valid parameter with an invalid value, or an expected module parameter was not found. Check the operational event log for more information. The module encountered an invalid parameter or a valid parameter with an invalid value, or an expected module parameter was not found. Check the operational event log for more information.
0x80565315An attempt was made to perform an initialization operation when initialization has already been completed. An attempt was made to perform an initialization operation when initialization has already been completed.
0x80565316An attempt was made to perform an uninitialization operation when that operation has already been completed. An attempt was made to perform an uninitialization operation when that operation has already been completed.
0x80565317The Data Deduplication service detected an internal folder that is not secure. To secure the folder, reinstall deduplication on the volume. The Data Deduplication service detected an internal folder that is not secure. To secure the folder, reinstall deduplication on the volume.
0x80565318Data chunking has already been initiated. Data chunking has already been initiated.
0x80565319An attempt was made to perform an operation from an invalid state. An attempt was made to perform an operation from an invalid state.
0x8056531AAn attempt was made to perform an operation before initialization. An attempt was made to perform an operation before initialization.
0x8056531BCall ::PushBuffer to continue chunking or ::Drain to enumerate any partial chunks. Call ::PushBuffer to continue chunking or ::Drain to enumerate any partial chunks.
0x8056531CThe Data Deduplication service detected multiple chunk store folders; however, only one chunk store folder is permitted. To fix this issue, reinstall deduplication on the volume. The Data Deduplication service detected multiple chunk store folders; however, only one chunk store folder is permitted. To fix this issue, reinstall deduplication on the volume.
0x8056531DThe data is invalid. The data is invalid.
0x8056531EThe process is in an unknown state. The process is in an unknown state.
0x8056531FThe process is not running. The process is not running.
0x80565320There was an error while opening the file. There was an error while opening the file.
0x80565321The job process could not start because the job was not found. The job process could not start because the job was not found.
0x80565322The client process ID does not match the ID of the host process that was started. The client process ID does not match the ID of the host process that was started.
0x80565323The specified volume is not enabled for deduplication. The specified volume is not enabled for deduplication.
0x80565324A zero-character chunk ID is not valid. A zero-character chunk ID is not valid.
0x80565325The index is filled to capacity. The index is filled to capacity.
0x80565327Session already exists. Session already exists.
0x80565328The compression format selected is not supported. The compression format selected is not supported.
0x80565329The compressed buffer is larger than the uncompressed buffer. The compressed buffer is larger than the uncompressed buffer.
0x80565330The buffer is not large enough. The buffer is not large enough.
0x8056533AIndex Scratch Log Error in: Seek, Read, Write, or Create Index Scratch Log Error in: Seek, Read, Write, or Create
0x8056533BThe job type is invalid. The job type is invalid.
0x8056533CPersistence layer enumeration error. Persistence layer enumeration error.
0x8056533DThe operation was cancelled. The operation was cancelled.
0x8056533EThis job will not run at the scheduled time because it requires more memory than is currently available. This job will not run at the scheduled time because it requires more memory than is currently available.
0x80565341The job was terminated while in a cancel or pending state. The job was terminated while in a cancel or pending state.
0x80565342The job was terminated while in a handshake pending state. The job was terminated while in a handshake pending state.
0x80565343The job was terminated due to a service shutdown. The job was terminated due to a service shutdown.
0x80565344The job was abandoned before starting. The job was abandoned before starting.
0x80565345The job process exited unexpectedly. The job process exited unexpectedly.
0x80565346The Data Deduplication service detected that the container cannot be compacted or updated because it has reached the maximum generation version. The Data Deduplication service detected that the container cannot be compacted or updated because it has reached the maximum generation version.
0x80565347The corruption log has reached its maximum size. The corruption log has reached its maximum size.
0x80565348The data deduplication scrubbing job failed to process the corruption logs. The data deduplication scrubbing job failed to process the corruption logs.
0x80565349Data deduplication failed to create new chunk store container files. Allocate more space to the volume. Data deduplication failed to create new chunk store container files. Allocate more space to the volume.
0x80565350An error occurred while opening the file because the file was in use. An error occurred while opening the file because the file was in use.
0x80565351An error was discovered while deduplicating the file. The file is now skipped. An error was discovered while deduplicating the file. The file is now skipped.
0x80565352File Server Deduplication encountered corruption while enumerating chunks in a chunk store. File Server Deduplication encountered corruption while enumerating chunks in a chunk store.
0x80565353The scan log is not valid. The scan log is not valid.
0x80565354The data is invalid due to checksum (CRC) mismatch error. The data is invalid due to checksum (CRC) mismatch error.
0x80565355Data deduplication encountered file corruption error. Data deduplication encountered file corruption error.
0x80565356Job completed with some errors. Check event logs for more details. Job completed with some errors. Check event logs for more details.
0x80565357Data deduplication is not supported on the version of the chunk store found on this volume. Data deduplication is not supported on the version of the chunk store found on this volume.
0x80565358Data deduplication encountered an unknown version of chunk store on this volume. Data deduplication encountered an unknown version of chunk store on this volume.
0x80565359The job was assigned less memory than the minimum it needs to run. The job was assigned less memory than the minimum it needs to run.
0x8056535AThe data deduplication job schedule cannot be modified. The data deduplication job schedule cannot be modified.
0x8056535BThe valid data length of chunk store container is misaligned. The valid data length of chunk store container is misaligned.
0x8056535CFile access is denied. File access is denied.
0x8056535DData deduplication job stopped due to too many corrupted files. Data deduplication job stopped due to too many corrupted files.
0x8056535EData deduplication job stopped due to an internal error in the BCrypt SHA-512 provider. Data deduplication job stopped due to an internal error in the BCrypt SHA-512 provider.
0x8056535FData deduplication job stopped for store reconciliation. Data deduplication job stopped for store reconciliation.
0x80565360File skipped for deduplication due to its size. File skipped for deduplication due to its size.
0x80565361File skipped due to deduplication retry limit. File skipped due to deduplication retry limit.
0x80565362The pipeline buffer cache is full. The pipeline buffer cache is full.
0x80565363Another Data deduplication job is already running on this volume. Another Data deduplication job is already running on this volume.
0x80565364Data deduplication cannot run this job on this Csv volume on this node. Try running the job on the Csv volume resource owner node. Data deduplication cannot run this job on this Csv volume on this node. Try running the job on the Csv volume resource owner node.
0x80565365Data deduplication failed to initialize cluster state on this node. Data deduplication failed to initialize cluster state on this node.
0x80565366Optimization of the range was aborted by the dedup filter driver. Optimization of the range was aborted by the dedup filter driver.
0x80565367The operation could not be performed because of a concurrent IO operation. The operation could not be performed because of a concurrent IO operation.
0x80565368Data deduplication encountered an unexpected error. Verify deduplication is enabled on all nodes if in a cluster configuration. Check the Data Deduplication Operational event log for more information. Data deduplication encountered an unexpected error. Verify deduplication is enabled on all nodes if in a cluster configuration. Check the Data Deduplication Operational event log for more information.
0x80565369Data access for data deduplicated CSV volumes can only be disabled when in maintenance mode. Check the Data Deduplication Operational event log for more information. Data access for data deduplicated CSV volumes can only be disabled when in maintenance mode. Check the Data Deduplication Operational event log for more information.
0x8056536AData Deduplication encountered an IO device error that may indicate a hardware fault in the storage subsystem. Data Deduplication encountered an IO device error that may indicate a hardware fault in the storage subsystem.
0x8056536BData deduplication cannot run this cmdlet on this Csv volume on this node. Try running the cmdlet on the Csv volume resource owner node. Data deduplication cannot run this cmdlet on this Csv volume on this node. Try running the cmdlet on the Csv volume resource owner node.
0x8056536CDeduplication job not supported during rolling cluster upgrade. Deduplication job not supported during rolling cluster upgrade.
0x8056536DDeduplication setting not supported during rolling cluster upgrade. Deduplication setting not supported during rolling cluster upgrade.
0x8056536EData port job is not ready to accept requests. Data port job is not ready to accept requests.
0x8056536FData port request not accepted due to request count/size limit exceeded. Data port request not accepted due to request count/size limit exceeded.
0x80565370Data port request completed with some errors. Check event logs for more details. Data port request completed with some errors. Check event logs for more details.
0x80565371Data port request failed. Check event logs for more details. Data port request failed. Check event logs for more details.
0x80565372Data port error accessing the hash index. Check event logs for more details. Data port error accessing the hash index. Check event logs for more details.
0x80565373Data port error accessing the stream store. Check event logs for more details. Data port error accessing the stream store. Check event logs for more details.
0x80565374Data port file stub error. Check event logs for more details. Data port file stub error. Check event logs for more details.
0x80565375Data port encountered a deduplication filter error. Check event logs for more details. Data port encountered a deduplication filter error. Check event logs for more details.
0x80565376Data port cannot commit stream map due to missing chunk. Check event logs for more details. Data port cannot commit stream map due to missing chunk. Check event logs for more details.
0x80565377Data port cannot commit stream map due to invalid stream map metadata. Check event logs for more details. Data port cannot commit stream map due to invalid stream map metadata. Check event logs for more details.
0x80565378Data port cannot commit stream map due to invalid stream map entry. Check event logs for more details. Data port cannot commit stream map due to invalid stream map entry. Check event logs for more details.
0x80565379Data port cannot retrieve job interface for volume. Check event logs for more details. Data port cannot retrieve job interface for volume. Check event logs for more details.
0x8056537AThe specified path is not supported. The specified path is not supported.
0x8056537BData port cannot decompress chunk. Check event logs for more details. Data port cannot decompress chunk. Check event logs for more details.
0x8056537CData port cannot calculate chunk hash. Check event logs for more details. Data port cannot calculate chunk hash. Check event logs for more details.
0x8056537DData port cannot read chunk stream. Check event logs for more details. Data port cannot read chunk stream. Check event logs for more details.
0x8056537EThe target file is not a deduplicated file. Check event logs for more details. The target file is not a deduplicated file. Check event logs for more details.
0x8056537FThe target file is partially recalled. Check event logs for more details. The target file is partially recalled. Check event logs for more details.
0x90000001Data Deduplication Data Deduplication
0x90000002Application Application
0x91000001Data Deduplication Change Events Data Deduplication Change Events
0xB0001000Volume \"%1\" appears as disconnected and it is ignored by the service. You may want to rescan disks. Error: %2.%n%3 Volume \"%1\" appears as disconnected and it is ignored by the service. You may want to rescan disks. Error: %2.%n%3
0xB0001001The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Most likely the CPU is under heavy load. Error: %4.%n%5 The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Most likely the CPU is under heavy load. Error: %4.%n%5
0xB0001002The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Error: %4.%n%5 The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Error: %4.%n%5
0xB0001003The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\" during Safe Mode. The Data Deduplication service cannot start while in safe mode. Error: %4.%n%5 The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\" during Safe Mode. The Data Deduplication service cannot start while in safe mode. Error: %4.%n%5
0xB0001004A critical component required by Data Deduplication is not registered. This might happen if an error occurred during Windows setup, or if the computer does not have the Windows Server 2012 or later version of Deduplication service installed. The error returned from CoCreateInstance on class with CLSID %1 and Name \"%2\" on machine \"%3\" is %4.%n%5 A critical component required by Data Deduplication is not registered. This might happen if an error occurred during Windows setup, or if the computer does not have the Windows Server 2012 or later version of Deduplication service installed. The error returned from CoCreateInstance on class with CLSID %1 and Name \"%2\" on machine \"%3\" is %4.%n%5
0xB0001005Data Deduplication service is shutting down due to idle timeout.%n%1 Data Deduplication service is shutting down due to idle timeout.%n%1
0xB0001006Data Deduplication service is shutting down due to shutdown event from the Service Control Manager.%n%1 Data Deduplication service is shutting down due to shutdown event from the Service Control Manager.%n%1
0xB0001007Data Deduplication job of type \"%1\" on volume \"%2\" has completed with return code: %3%n%4 Data Deduplication job of type \"%1\" on volume \"%2\" has completed with return code: %3%n%4
0xB0001008Data Deduplication error: Unexpected error calling routine %1. hr = %2.%n%3 Data Deduplication error: Unexpected error calling routine %1. hr = %2.%n%3
0xB0001009Data Deduplication error: Unexpected error.%n%1 Data Deduplication error: Unexpected error.%n%1
0xB000100AData Deduplication warning: %1%nError: %2.%n%3 Data Deduplication warning: %1%nError: %2.%n%3
0xB000100BData Deduplication error: Unexpected COM error %1: %2. Error code: %3.%n%4 Data Deduplication error: Unexpected COM error %1: %2. Error code: %3.%n%4
0xB000100CData Deduplication was unable to access the following file or volume: \"%1\". This file or volume might be locked by another application right now, or you might need to give Local System access to it.%n%2 Data Deduplication was unable to access the following file or volume: \"%1\". This file or volume might be locked by another application right now, or you might need to give Local System access to it.%n%2
0xB000100DData Deduplication encountered an unexpected error during volume scan of volumes mounted at \"%1\" (\"%2\"). To find out more information about the root cause for this error please consult the Application/System event log for other Deduplication service, VSS or VOLSNAP errors related with these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3 Data Deduplication encountered an unexpected error during volume scan of volumes mounted at \"%1\" (\"%2\"). To find out more information about the root cause for this error please consult the Application/System event log for other Deduplication service, VSS or VOLSNAP errors related with these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3
0xB000100EData Deduplication was unable to create or access the shadow copy for volumes mounted at \"%1\" (\"%2\"). Possible causes include an improper Shadow Copy configuration, insufficient disk space, or extreme memory, I/O or CPU load of the system. To find out more information about the root cause for this error please consult the Application/System event log for other Deduplication service, VSS or VOLSNAP errors related with these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3 Data Deduplication was unable to create or access the shadow copy for volumes mounted at \"%1\" (\"%2\"). Possible causes include an improper Shadow Copy configuration, insufficient disk space, or extreme memory, I/O or CPU load of the system. To find out more information about the root cause for this error please consult the Application/System event log for other Deduplication service, VSS or VOLSNAP errors related with these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3
0xB000100FData Deduplication was unable to access volumes mounted at \"%1\" (\"%2\"). Make sure that dismount or format operations do not happen while running deduplication.%n%3 Data Deduplication was unable to access volumes mounted at \"%1\" (\"%2\"). Make sure that dismount or format operations do not happen while running deduplication.%n%3
0xB0001010Data Deduplication was unable to access a file or volume. Details:%n%n%1%n The volume may be inaccessible for I/O operations or marked read-only. In case of a cluster volume, this may be a transient failure during failover.%n%2 Data Deduplication was unable to access a file or volume. Details:%n%n%1%n The volume may be inaccessible for I/O operations or marked read-only. In case of a cluster volume, this may be a transient failure during failover.%n%2
0xB0001011Data Deduplication was unable to scan volume \"%1\" (\"%2\").%n%3 Data Deduplication was unable to scan volume \"%1\" (\"%2\").%n%3
0xB0001012Data Deduplication detected a corruption on file \"%1\" at offset (\"%2\"). If this condition persists then please restore the data from a previous backup. Corruption details: Structure=%3, Corruption type = %4, Additional data = %5%n%6 Data Deduplication detected a corruption on file \"%1\" at offset (\"%2\"). If this condition persists then please restore the data from a previous backup. Corruption details: Structure=%3, Corruption type = %4, Additional data = %5%n%6
0xB0001013Data Deduplication encountered failure while reconciling chunk store on volume \"%1\". The error code was %2. Reconciliation is disabled for the current optimization job.%n%3 Data Deduplication encountered failure while reconciling chunk store on volume \"%1\". The error code was %2. Reconciliation is disabled for the current optimization job.%n%3
0xB0001016Data Deduplication encountered corrupted chunk container %1 while performing full garbage collection. The corrupted chunk container is skipped.%n%2 Data Deduplication encountered corrupted chunk container %1 while performing full garbage collection. The corrupted chunk container is skipped.%n%2
0xB0001017Data Deduplication could not initialize change log under %1. The error code was %2.%n%3 Data Deduplication could not initialize change log under %1. The error code was %2.%n%3
0xB0001018Data Deduplication service could not mark chunk container %1 as reconciled. The error code was %2.%n%3 Data Deduplication service could not mark chunk container %1 as reconciled. The error code was %2.%n%3
0xB0001019A Data Deduplication configuration file is corrupted. The system or volume may need to be restored from backup.%n%1 A Data Deduplication configuration file is corrupted. The system or volume may need to be restored from backup.%n%1
0xB000101AData Deduplication was unable to save one of the configuration stores on volume \"%1\" due to a disk-full error: If the disk is full, please clean it up (extend the volume or delete some files). If the disk is not full, but there is a hard quota on the volume root, please delete, disable or increase this quota.%n%2 Data Deduplication was unable to save one of the configuration stores on volume \"%1\" due to a disk-full error: If the disk is full, please clean it up (extend the volume or delete some files). If the disk is not full, but there is a hard quota on the volume root, please delete, disable or increase this quota.%n%2
0xB000101BData Deduplication could not access global configuration since the cluster service is not running. Please start the cluster service and retry the operation.%n%1 Data Deduplication could not access global configuration since the cluster service is not running. Please start the cluster service and retry the operation.%n%1
0xB000101CShadow copy \"%1\" was deleted during storage report generation. Volume \"%2\" might be configured with inadequate shadow copy storage area. Data Deduplication could not process this volume.%n%3 Shadow copy \"%1\" was deleted during storage report generation. Volume \"%2\" might be configured with inadequate shadow copy storage area. Data Deduplication could not process this volume.%n%3
0xB000101DShadow copy creation failed for volume \"%1\" after retrying for %2 minutes because other shadow copies were being created. Reschedule the Data Deduplication for a less busy time.%n%3 Shadow copy creation failed for volume \"%1\" after retrying for %2 minutes because other shadow copies were being created. Reschedule the Data Deduplication for a less busy time.%n%3
0xB000101EVolume \"%1\" is not supported for shadow copy. It is possible that the volume was removed from the system. Data Deduplication service could not process this volume.%n%2 Volume \"%1\" is not supported for shadow copy. It is possible that the volume was removed from the system. Data Deduplication service could not process this volume.%n%2
0xB000101FThe volume \"%1\" has been deleted or removed from the system.%n%2 The volume \"%1\" has been deleted or removed from the system.%n%2
0xB0001020Shadow copy creation failed for volume \"%1\" with error %2. The volume might be configured with inadequate shadow copy storage area. File Server Deduplication service could not process this volume.%n%3 Shadow copy creation failed for volume \"%1\" with error %2. The volume might be configured with inadequate shadow copy storage area. File Server Deduplication service could not process this volume.%n%3
0xB0001021The file system on volume \"%1\" is potentially corrupted. Please run the CHKDSK utility to verify and fix the file system.%n%2 The file system on volume \"%1\" is potentially corrupted. Please run the CHKDSK utility to verify and fix the file system.%n%2
0xB0001022Data Deduplication detected an insecure internal folder. To secure the folder, reinstall deduplication on the volume again.%n%1 Data Deduplication detected an insecure internal folder. To secure the folder, reinstall deduplication on the volume again.%n%1
0xB0001023Data Deduplication could not find a chunk store on the volume.%n%1 Data Deduplication could not find a chunk store on the volume.%n%1
0xB0001024Data Deduplication detected multiple chunk store folders. To recover, reinstall deduplication on the volume.%n%1 Data Deduplication detected multiple chunk store folders. To recover, reinstall deduplication on the volume.%n%1
0xB0001025Data Deduplication detected conflicting chunk store folders: \"%1\" and \"%2\".%n%3 Data Deduplication detected conflicting chunk store folders: \"%1\" and \"%2\".%n%3
0xB0001026The data is invalid.%n%1 The data is invalid.%n%1
0xB0001027Data Deduplication scheduler failed to initialize with error \"%1\".%n%2 Data Deduplication scheduler failed to initialize with error \"%1\".%n%2
0xB0001028Data Deduplication failed to validate job type \"%1\" on volume \"%2\" with error \"%3\".%n%4 Data Deduplication failed to validate job type \"%1\" on volume \"%2\" with error \"%3\".%n%4
0xB0001029Data Deduplication failed to start job type \"%1\" on volume \"%2\" with error \"%3\".%n%4 Data Deduplication failed to start job type \"%1\" on volume \"%2\" with error \"%3\".%n%4
0xB000102CData Deduplication detected job type \"%1\" on volume \"%2\" uses too much memory. %3 MB is assigned. %4 MB is used.%n%5 Data Deduplication detected job type \"%1\" on volume \"%2\" uses too much memory. %3 MB is assigned. %4 MB is used.%n%5
0xB000102DData Deduplication detected job type \"%1\" on volume \"%2\" memory usage has dropped to desirable level.%n%3 Data Deduplication detected job type \"%1\" on volume \"%2\" memory usage has dropped to desirable level.%n%3
0xB000102EData Deduplication cancelled job type \"%1\" on volume \"%2\". It used more memory than the amount assigned to it.%n%3 Data Deduplication cancelled job type \"%1\" on volume \"%2\". It used more memory than the amount assigned to it.%n%3
0xB000102FData Deduplication cancelled job type \"%1\" on volume \"%2\". Memory resource is running low on the machine or in the job.%n%3 Data Deduplication cancelled job type \"%1\" on volume \"%2\". Memory resource is running low on the machine or in the job.%n%3
0xB0001030Data Deduplication job type \"%1\" on volume \"%2\" failed to report completion to the service with error: %3.%n%4 Data Deduplication job type \"%1\" on volume \"%2\" failed to report completion to the service with error: %3.%n%4
0xB0001031Data Deduplication detected a container cannot be compacted or updated because it has reached the maximum generation.%n%1 Data Deduplication detected a container cannot be compacted or updated because it has reached the maximum generation.%n%1
0xB0001032Data Deduplication corruption log \"%1\" is corrupted.%n%2 Data Deduplication corruption log \"%1\" is corrupted.%n%2
0xB0001033Data Deduplication corruption log \"%1\" has reached maximum allowed size \"%2\". Please run scrubbing job to process corruption log. No more corruptions will be reported until the log is processed.%n%3 Data Deduplication corruption log \"%1\" has reached maximum allowed size \"%2\". Please run scrubbing job to process corruption log. No more corruptions will be reported until the log is processed.%n%3
0xB0001034Data Deduplication corruption log \"%1\" has reached maximum allowed size \"%2\". No more corruptions will be reported until the log is processed.%n%3 Data Deduplication corruption log \"%1\" has reached maximum allowed size \"%2\". No more corruptions will be reported until the log is processed.%n%3
0xB0001035Data Deduplication scheduler failed to uninitialize with error \"%1\".%n%2 Data Deduplication scheduler failed to uninitialize with error \"%1\".%n%2
0xB0001036Data Deduplication detected that a new container could not be created in a chunk store because it ran out of available container IDs.%n%1 Data Deduplication detected that a new container could not be created in a chunk store because it ran out of available container IDs.%n%1
0xB0001037Data Deduplication full garbage collection phase 1 (cleaning file related metadata) on volume \"%1\" failed with error: %2. The job will continue with phase 2 execution (data chunk cleanup).%n%3 Data Deduplication full garbage collection phase 1 (cleaning file related metadata) on volume \"%1\" failed with error: %2. The job will continue with phase 2 execution (data chunk cleanup).%n%3
0xB0001039Data Deduplication full garbage collection could not achieve maximum space reclamation because delete logs for data container %1 could not be cleaned up.%n%2 Data Deduplication full garbage collection could not achieve maximum space reclamation because delete logs for data container %1 could not be cleaned up.%n%2
0xB000103ASome files could not be deduplicated because of FSRM Quota violations on volume %1. The skipped files are likely compressed or sparse files in folders that are at or near their quota limit. Please consider increasing the quota limit for those folders.%n%2 Some files could not be deduplicated because of FSRM Quota violations on volume %1. The skipped files are likely compressed or sparse files in folders that are at or near their quota limit. Please consider increasing the quota limit for those folders.%n%2
0xB000103BData Deduplication failed to dedup file %1 \"%2\" due to fatal error %3%n%4 Data Deduplication failed to dedup file %1 \"%2\" due to fatal error %3%n%4
0xB000103CData Deduplication encountered corruption while accessing a file in the chunk store.%n%1 Data Deduplication encountered corruption while accessing a file in the chunk store.%n%1
0xB000103DData Deduplication encountered corruption while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1 Data Deduplication encountered corruption while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1
0xB000103EData Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1 Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1
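Note: messages 0xB000103D and 0xB000103E advise running a scrubbing job. A minimal sketch using the Deduplication PowerShell cmdlets, with D: as a placeholder for the affected volume:
# start a scrubbing job on the affected volume and check its progress
Start-DedupJob -Volume "D:" -Type Scrubbing
Get-DedupJob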
0xB000103FData Deduplication is unable to access file %1 because the file is in use.%n%2 Data Deduplication is unable to access file %1 because the file is in use.%n%2
0xB0001040Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store.%n%1 Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store.%n%1
0xB0001041Data Deduplication cannot run the job on volume %1 because the dedup store version compatibility check failed with error %2.%n%3 Data Deduplication cannot run the job on volume %1 because the dedup store version compatibility check failed with error %2.%n%3
0xB0001042Data Deduplication has disabled the volume %1 because it has discovered too many corruptions. Please run deep scrubbing on the volume.%n%2 Data Deduplication has disabled the volume %1 because it has discovered too many corruptions. Please run deep scrubbing on the volume.%n%2
0xB0001043Data Deduplication has detected a corrupt corruption metadata file on the store at %1. Please run deep scrubbing on the volume.%n%2 Data Deduplication has detected a corrupt corruption metadata file on the store at %1. Please run deep scrubbing on the volume.%n%2
0xB0001044Volume \"%1\" cannot be enabled for Data Deduplication. Data Deduplication does not support volumes larger than 64TB. Error: %2.%n%3 Volume \"%1\" cannot be enabled for Data Deduplication. Data Deduplication does not support volumes larger than 64TB. Error: %2.%n%3
0xB0001045Data Deduplication cannot be enabled on SIS volume \"%1\". Error: %2.%n%3 Data Deduplication cannot be enabled on SIS volume \"%1\". Error: %2.%n%3
0xB0001046File-system is configured for case-sensitive file/folder names. Data Deduplication does not support case-sensitive file-system mode.%n%1 File-system is configured for case-sensitive file/folder names. Data Deduplication does not support case-sensitive file-system mode.%n%1
0xB0001049Data Deduplication changed scrubbing job to read-only due to insufficient disk space.%n%1 Data Deduplication changed scrubbing job to read-only due to insufficient disk space.%n%1
0xB000104BData Deduplication has disabled the volume %1 because there are missing or corrupt containers. Please run deep scrubbing on the volume.%n%2 Data Deduplication has disabled the volume %1 because there are missing or corrupt containers. Please run deep scrubbing on the volume.%n%2
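Note: messages such as 0xB0001042 and 0xB000104B recommend deep scrubbing. A hedged sketch, assuming deep scrubbing corresponds to a full scrubbing job (an assumption) and with D: as a placeholder volume:
# full scrubbing pass; -Full is assumed here to correspond to deep scrubbing
Start-DedupJob -Volume "D:" -Type Scrubbing -Full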
0xB000104DData Deduplication encountered a disk-full error.%n%1 Data Deduplication encountered a disk-full error.%n%1
0xB000104EData Deduplication job cannot run on volume \"%1\" due to insufficient disk space.%n%2 Data Deduplication job cannot run on volume \"%1\" due to insufficient disk space.%n%2
0xB000104FData Deduplication job cannot run on offline volume \"%1\".%n%2 Data Deduplication job cannot run on offline volume \"%1\".%n%2
0xB0001050Data Deduplication recovered a corrupt or missing file.%n%1 Data Deduplication recovered a corrupt or missing file.%n%1
0xB0001051Data Deduplication encountered a corrupted metadata file. To correct the problem, schedule or manually run a Garbage Collection job on the affected volume with the -Full option.%n%1 Data Deduplication encountered a corrupted metadata file. To correct the problem, schedule or manually run a Garbage Collection job on the affected volume with the -Full option.%n%1
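Note: message 0xB0001051 refers to a Garbage Collection job with the -Full option. A hedged example, with D: as a placeholder for the affected volume:
# full garbage collection on the affected volume
Start-DedupJob -Volume "D:" -Type GarbageCollection -Full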
0xB0001052Data Deduplication encountered chunk %1 with corrupted header while updating container. The corrupted chunk is replicated to the new container %2.%n%3 Data Deduplication encountered chunk %1 with corrupted header while updating container. The corrupted chunk is replicated to the new container %2.%n%3
0xB0001053Data Deduplication encountered chunk %1 with transient header corruption while updating container. The corrupted chunk is NOT replicated to the new container %2.%n%3 Data Deduplication encountered chunk %1 with transient header corruption while updating container. The corrupted chunk is NOT replicated to the new container %2.%n%3
0xB0001054Data Deduplication failed to read chunk container redirection table from file %1 with error %2.%n%3 Data Deduplication failed to read chunk container redirection table from file %1 with error %2.%n%3
0xB0001055Data Deduplication failed to initialize reparse point index table for deep scrubbing from file %1 with error %2.%n%3 Data Deduplication failed to initialize reparse point index table for deep scrubbing from file %1 with error %2.%n%3
0xB0001056Data Deduplication failed to deep scrub container file %1 on volume %2 with error %3.%n%4 Data Deduplication failed to deep scrub container file %1 on volume %2 with error %3.%n%4
0xB0001057Data Deduplication failed to load stream map log for deep scrubbing from file %1 with error %2.%n%3 Data Deduplication failed to load stream map log for deep scrubbing from file %1 with error %2.%n%3
0xB0001058Data Deduplication found a duplicate local chunk id %1 in container file %2.%n%3 Data Deduplication found a duplicate local chunk id %1 in container file %2.%n%3
0xB0001059Data Deduplication job type \"%1\" on volume \"%2\" was cancelled manually.%n%3 Data Deduplication job type \"%1\" on volume \"%2\" was cancelled manually.%n%3
0xB000105AScheduled Data Deduplication job type \"%1\" on volume \"%2\" was cancelled.%n%3 Scheduled Data Deduplication job type \"%1\" on volume \"%2\" was cancelled.%n%3
0xB000105DThe Data Deduplication chunk store statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2 The Data Deduplication chunk store statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2
0xB000105EThe Data Deduplication volume statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2 The Data Deduplication volume statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2
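Note: messages 0xB000105D and 0xB000105E mention the Update-DedupStatus cmdlet. A minimal example, with D: as a placeholder volume:
# recompute and then display deduplication statistics for the volume
Update-DedupStatus -Volume "D:"
Get-DedupStatus -Volume "D:"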
0xB000105FData Deduplication failed to append to deep scrubbing log file %1 with error %2.%n%3 Data Deduplication failed to append to deep scrubbing log file %1 with error %2.%n%3
0xB0001060Data Deduplication encountered a failure during deep scrubbing on store %1 with error %2.%n%3 Data Deduplication encountered a failure during deep scrubbing on store %1 with error %2.%n%3
0xB0001061Data Deduplication cancelled job type \"%1\" on volume \"%2\". The job violated the CSV dedup job placement policy.%n%3 Data Deduplication cancelled job type \"%1\" on volume \"%2\". The job violated the CSV dedup job placement policy.%n%3
0xB0001062Data Deduplication cancelled job type \"%1\" on volume \"%2\". The CSV job monitor has been uninitialized.%n%3 Data Deduplication cancelled job type \"%1\" on volume \"%2\". The CSV job monitor has been uninitialized.%n%3
0xB0001063Data Deduplication encountered an IO device error while accessing a file on the volume. This is likely a hardware fault in the storage subsystem.%n%1 Data Deduplication encountered an IO device error while accessing a file on the volume. This is likely a hardware fault in the storage subsystem.%n%1
0xB0001064Data Deduplication encountered an unexpected error. If this is a cluster, verify Data Deduplication is enabled on all nodes of the cluster.%n%1 Data Deduplication encountered an unexpected error. If this is a cluster, verify Data Deduplication is enabled on all nodes of the cluster.%n%1
0xB0001065Attempted to disable data access for data deduplicated CSV volume \"%1\" without maintenance mode. Data access can only be disabled for a CSV volume when in maintenance mode. Place volume into maintenance mode and retry.%n%2 Attempted to disable data access for data deduplicated CSV volume \"%1\" without maintenance mode. Data access can only be disabled for a CSV volume when in maintenance mode. Place volume into maintenance mode and retry.%n%2
0xB0001800Data Deduplication service could not unoptimize file \"%5%6%7\". Error %8, \"%9\". Data Deduplication service could not unoptimize file \"%5%6%7\". Error %8, \"%9\".
0xB0001801Data Deduplication service failed to unoptimize too many files (%3). Some files are not reported. Data Deduplication service failed to unoptimize too many files (%3). Some files are not reported.
0xB0001802Data Deduplication service has finished unoptimization on volume %3 with no errors. Data Deduplication service has finished unoptimization on volume %3 with no errors.
0xB0001803Data Deduplication service has finished unoptimization on volume %3 with %4 errors. Data Deduplication service has finished unoptimization on volume %3 with %4 errors.
0xB0001804%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10 %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10
0xB0001805%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nPriority: %7%nFull: %8%nVolume free space (MB): %9 %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nPriority: %7%nFull: %8%nVolume free space (MB): %9
0xB0001806%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6%nFull: %7%nRead-only: %8 %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6%nFull: %7%nRead-only: %8
0xB0001807%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6 %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6
0xB0001809%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nIn-policy file count: %12%nJob processed space (MB): %13%nJob elapsed time (seconds): %18%nJob throughput (MB/second): %19%nChurn processing throughput (MB/second): %20 %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nIn-policy file count: %12%nJob processed space (MB): %13%nJob elapsed time (seconds): %18%nJob throughput (MB/second): %19%nChurn processing throughput (MB/second): %20
0xB000180A%1 job has completed.%n%nFull: %2%nVolume: %5 (%4)%nError code: %6%nError message: %7%nFreed up space (MB): %8%nVolume free space (MB): %9%nJob elapsed time (seconds): %10%nJob throughput (MB/second): %11 %1 job has completed.%n%nFull: %2%nVolume: %5 (%4)%nError code: %6%nError message: %7%nFreed up space (MB): %8%nVolume free space (MB): %9%nJob elapsed time (seconds): %10%nJob throughput (MB/second): %11
0xB000180B%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6 %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6
0xB000180C%1 job has completed.%n%nFull: %2%nRead-only: %3%nVolume: %6 (%5)%nError code: %7%nError message: %8%nTotal corruption count: %9%nFixable corruption count: %10%n%nWhen corruptions are found, check more details in the Scrubbing event channel. %1 job has completed.%n%nFull: %2%nRead-only: %3%nVolume: %6 (%5)%nError code: %7%nError message: %8%nTotal corruption count: %9%nFixable corruption count: %10%n%nWhen corruptions are found, check more details in the Scrubbing event channel.
0xB000180D%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nUnoptimized file count: %7%nJob processed space (MB): %8%nJob elapsed time (seconds): %9%nJob throughput (MB/second): %10 %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nUnoptimized file count: %7%nJob processed space (MB): %8%nJob elapsed time (seconds): %9%nJob throughput (MB/second): %10
0xB000180E%1 job has been queued.%n%nVolume: %4 (%3)%nSystem memory percent: %5 %nPriority: %6%nSchedule mode: %7 %1 job has been queued.%n%nVolume: %4 (%3)%nSystem memory percent: %5 %nPriority: %6%nSchedule mode: %7
0xB000181CRestore of deduplicated file \"%1\" failed with the following error: %2, \"%3\". Restore of deduplicated file \"%1\" failed with the following error: %2, \"%3\".
0xB000181DPriority %1 job has started.%n%nVolume: %4 (%3)%nFile ID: %11%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10 Priority %1 job has started.%n%nVolume: %4 (%3)%nFile ID: %11%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10
0xB000181E%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable threads: %6%nPriority: %7 %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable threads: %6%nPriority: %7
0xB000181F%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nChunk lookup count: %12%nInserted chunk count: %13%nInserted chunks logical data (MB): %14%nInserted chunks physical data (MB): %15%nCommitted stream count: %16%nCommitted stream entry count: %17%nCommitted stream logical data (MB): %18%nRetrieved chunks physical data (MB): %19%nRetrieved stream logical data (MB): %20%nDataPort time (seconds): %21%nJob elapsed time (seconds): %22%nIngress throughput (MB/second): %23%nEgress throughput (MB/second): %24 %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nChunk lookup count: %12%nInserted chunk count: %13%nInserted chunks logical data (MB): %14%nInserted chunks physical data (MB): %15%nCommitted stream count: %16%nCommitted stream entry count: %17%nCommitted stream logical data (MB): %18%nRetrieved chunks physical data (MB): %19%nRetrieved stream logical data (MB): %20%nDataPort time (seconds): %21%nJob elapsed time (seconds): %22%nIngress throughput (MB/second): %23%nEgress throughput (MB/second): %24
0xB0001821Data Deduplication detected a non-clustered volume specified for the chunk index cache volume in a clustered deployment. The configuration is not recommended because it may result in job failures after failover.%n%nVolume: %3 (%2) Data Deduplication detected a non-clustered volume specified for the chunk index cache volume in a clustered deployment. The configuration is not recommended because it may result in job failures after failover.%n%nVolume: %3 (%2)
0xB0002000Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" is low. The ratio to commit size is %3.%n%4 Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" is low. The ratio to commit size is %3.%n%4
0xB0002001Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" has recovered to a desirable level.%n%3 Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" has recovered to a desirable level.%n%3
0xB0002002Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" is high. The rate is %3 page faults per second.%n%4 Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" is high. The rate is %3 page faults per second.%n%4
0xB0002003Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" has lowered to a desirable level. The rate is %3 page faults per second.%n%4 Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" has lowered to a desirable level. The rate is %3 page faults per second.%n%4
0xB0002004Data Deduplication failed to dedup file \"%1\" with file ID %2 due to non-fatal error %3%n%4.%n%nNote: You can retrieve the file name by running the command FSUTIL FILE QUERYFILENAMEBYID on the file in question. Data Deduplication failed to dedup file \"%1\" with file ID %2 due to non-fatal error %3%n%4.%n%nNote: You can retrieve the file name by running the command FSUTIL FILE QUERYFILENAMEBYID on the file in question.
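Note: message 0xB0002004 points to FSUTIL FILE QUERYFILENAMEBYID for resolving the file ID. A hedged example from an elevated prompt; the volume (D:\) and the file ID shown are placeholders:
# resolve the reported file ID to a file name on the volume
fsutil file queryFileNameById D:\ 0x0000000000001234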
0xB000200CData Deduplication has aborted a group commit session.%n%nFile count: %1%nError: %2%n%3 Data Deduplication has aborted a group commit session.%n%nFile count: %1%nError: %2%n%3
0xB000201CFailed to open the dedup settings registry key Failed to open the dedup settings registry key
0xB000201DData Deduplication failed to dedup file \"%1\" with file ID %2 due to oplock break%n%3 Data Deduplication failed to dedup file \"%1\" with file ID %2 due to oplock break%n%3
0xB000201EData Deduplication failed to load hotspot table from file %1 due to error %2.%n%3 Data Deduplication failed to load hotspot table from file %1 due to error %2.%n%3
0xB000201FData Deduplication failed to initialize oplock.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4 Data Deduplication failed to initialize oplock.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4
0xB0002020Data Deduplication, while running a job on volume %1, detected an invalid physical sector size %2. Using default value %3.%n%4 Data Deduplication, while running a job on volume %1, detected an invalid physical sector size %2. Using default value %3.%n%4
0xB0002021Data Deduplication detected an unsupported chunk store container.%n%1 Data Deduplication detected an unsupported chunk store container.%n%1
0xB0002022Data Deduplication could not create window to receive task scheduler stop message due to error %1. Task(s) may not stop after duration limit.%n%2 Data Deduplication could not create window to receive task scheduler stop message due to error %1. Task(s) may not stop after duration limit.%n%2
0xB0002023Data Deduplication could not create thread to poll for task scheduler stop message due to error %1. Task(s) may not stop after duration limit.%n%2 Data Deduplication could not create thread to poll for task scheduler stop message due to error %1. Task(s) may not stop after duration limit.%n%2
0xB0002024An attempt was made to perform an initialization operation when initialization has already been completed.%n%1 An attempt was made to perform an initialization operation when initialization has already been completed.%n%1
0xB0002028Data Deduplication created emergency file %1.%n%3 Data Deduplication created emergency file %1.%n%3
0xB0002029Data Deduplication failed to create emergency file %1 with error %2.%n%3 Data Deduplication failed to create emergency file %1 with error %2.%n%3
0xB000202AData Deduplication deleted emergency file %1.%n%3 Data Deduplication deleted emergency file %1.%n%3
0xB000202BData Deduplication failed to delete emergency file %1 with error %2.%n%3 Data Deduplication failed to delete emergency file %1 with error %2.%n%3
0xB000202CData Deduplication detected a chunk store container with misaligned valid data length.%n%1 Data Deduplication detected a chunk store container with misaligned valid data length.%n%1
0xB000202DData Deduplication Garbage Collection encountered a delete log entry with an invalid stream map signature for stream map Id %1.%n%2 Data Deduplication Garbage Collection encountered a delete log entry with an invalid stream map signature for stream map Id %1.%n%2
0xB000202EData Deduplication failed to initialize oplock as the file appears to be missing.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4 Data Deduplication failed to initialize oplock as the file appears to be missing.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4
0xB000202FData Deduplication skipped too many file-level errors. No more than %1 file-level errors will be logged per job.%n%2 Data Deduplication skipped too many file-level errors. No more than %1 file-level errors will be logged per job.%n%2
0xB0002030Data Deduplication diagnostic warning.%n%n%1%n%2 Data Deduplication diagnostic warning.%n%n%1%n%2
0xB0002031Data Deduplication diagnostic information.%n%n%1%n%2 Data Deduplication diagnostic information.%n%n%1%n%2
0xB0002032Data Deduplication found file %1 with a stream map id %2 in container file %3 marked for deletion.%n%4 Data Deduplication found file %1 with a stream map id %2 in container file %3 marked for deletion.%n%4
0xB0002033Failed to enqueue job of type \"%1\" on volume \"%2\".%n%3 Failed to enqueue job of type \"%1\" on volume \"%2\".%n%3
0xB0002034Error terminating job host process for job type \"%1\" on volume \"%2\" (process id: %3).%n%4 Error terminating job host process for job type \"%1\" on volume \"%2\" (process id: %3).%n%4
0xB0002035Data Deduplication encountered corrupted chunk %1 while updating container. Corrupted data that cannot be repaired will be copied as-is to the new container %2.%n%3 Data Deduplication encountered corrupted chunk %1 while updating container. Corrupted data that cannot be repaired will be copied as-is to the new container %2.%n%3
0xB0002036Data Deduplication job type \"%1\" on volume \"%2\" failed to exit gracefully.%n%3 Data Deduplication job type \"%1\" on volume \"%2\" failed to exit gracefully.%n%3
0xB0002037Data Deduplication job host for job type \"%1\" on volume \"%2\" exited unexpectedly.%n%3 Data Deduplication job host for job type \"%1\" on volume \"%2\" exited unexpectedly.%n%3
0xB0002038Data Deduplication has failed to load corruption metadata file on the store at %1 due to error %2. Please run deep scrubbing on the volume.%n%3 Data Deduplication has failed to load corruption metadata file on the store at %1 due to error %2. Please run deep scrubbing on the volume.%n%3
0xB0002039Data Deduplication full garbage collection phase 1 on volume \"%1\" encountered an error %2 while processing file %3. Phase 1 will be aborted because garbage collection of file-related metadata cannot safely continue after file errors.%n%4 Data Deduplication full garbage collection phase 1 on volume \"%1\" encountered an error %2 while processing file %3. Phase 1 will be aborted because garbage collection of file-related metadata cannot safely continue after file errors.%n%4
0xB000203AData Deduplication has failed to process corruption metadata file %1 due to error %2. Please run deep scrubbing on the volume.%n%3 Data Deduplication has failed to process corruption metadata file %1 due to error %2. Please run deep scrubbing on the volume.%n%3
0xB000203BData Deduplication has failed to load a corrupted metadata file %1 due to error %2. Deleting the file and continuing.%n%3 Data Deduplication has failed to load a corrupted metadata file %1 due to error %2. Deleting the file and continuing.%n%3
0xB000203CData Deduplication has failed to set NTFS allocation size for container file %1 due to error %2.%n%3 Data Deduplication has failed to set NTFS allocation size for container file %1 due to error %2.%n%3
0xB000203DData Deduplication configured to use BCrypt provider '%1' for hash algorithm %2.%n%3 Data Deduplication configured to use BCrypt provider '%1' for hash algorithm %2.%n%3
0xB000203EData Deduplication could not use BCrypt provider '%1' for hash algorithm %2 due to an error in operation %3. Reverting to the Microsoft primitive CNG provider.%n%4 Data Deduplication could not use BCrypt provider '%1' for hash algorithm %2 due to an error in operation %3. Reverting to the Microsoft primitive CNG provider.%n%4
0xB000203FData Deduplication failed to include file \"%1\" in file metadata analysis calculations.%n%2 Data Deduplication failed to include file \"%1\" in file metadata analysis calculations.%n%2
0xB0002040Data Deduplication failed to include stream map %1 in file metadata analysis calculations.%n%2 Data Deduplication failed to include stream map %1 in file metadata analysis calculations.%n%2
0xB0002041Data Deduplication encountered an error for file \"%1\" while scanning files and folders.%n%2 Data Deduplication encountered an error for file \"%1\" while scanning files and folders.%n%2
0xB0002042Data Deduplication encountered an error while attempting to resume processing. Please consult the event log parameters for more details about the current file being processed.%n%1 Data Deduplication encountered an error while attempting to resume processing. Please consult the event log parameters for more details about the current file being processed.%n%1
0xB0002043Data Deduplication encountered an error %1 while scanning the USN journal on volume %2 to update hot range tracking.%n%3 Data Deduplication encountered an error %1 while scanning the USN journal on volume %2 to update hot range tracking.%n%3
0xB0002044Data Deduplication could not truncate the stream of an optimized file. No action is required. Error: %1%n%n%2 Data Deduplication could not truncate the stream of an optimized file. No action is required. Error: %1%n%n%2
0xB0002800%1 job memory requirements.%n%nVolume: %4 (%3)%nMinimum memory: %5 MB%nMaximum memory: %6 MB%nMinimum disk: %7 MB%nMaximum cores: %8 %1 job memory requirements.%n%nVolume: %4 (%3)%nMinimum memory: %5 MB%nMaximum memory: %6 MB%nMinimum disk: %7 MB%nMaximum cores: %8
0xB0002801%1 reconciliation has started.%n%nVolume: %4 (%3) %1 reconciliation has started.%n%nVolume: %4 (%3)
0xB0002802%1 reconciliation has completed.%n%nGuidance: This event is expected when Reconciliation has completed; there is no recommended or required action. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory. %n%nVolume: %4 (%3)%nReconciled containers: %5%nUnreconciled containers: %6%nCatchup references: %7%nCatchup containers: %8%nReconciled references: %9%nReconciled containers: %10%nCross-reconciled references: %11%nCross-reconciled containers: %12%nError code: %13%nError message: %14 %1 reconciliation has completed.%n%nGuidance: This event is expected when Reconciliation has completed; there is no recommended or required action. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory. %n%nVolume: %4 (%3)%nReconciled containers: %5%nUnreconciled containers: %6%nCatchup references: %7%nCatchup containers: %8%nReconciled references: %9%nReconciled containers: %10%nCross-reconciled references: %11%nCross-reconciled containers: %12%nError code: %13%nError message: %14
0xB0002803%1 job on volume %4 (%3) was configured with insufficient memory.%n%nSystem memory percentage: %5%nAvailable memory: %8 MB%nMinimum required memory: %6 MB %1 job on volume %4 (%3) was configured with insufficient memory.%n%nSystem memory percentage: %5%nAvailable memory: %8 MB%nMinimum required memory: %6 MB
0xB0002804Optimization memory details for %1 job on volume %3 (%2). Optimization memory details for %1 job on volume %3 (%2).
0xB0002805An open file was skipped during optimization. No action is required.%n%nFileId: %2%nSkip Reason: %1 An open file was skipped during optimization. No action is required.%n%nFileId: %2%nSkip Reason: %1
0xB0002806An operation succeeded after one or more retries. Operation: %1; FileId: %3; Number of retries: %2 An operation succeeded after one or more retries. Operation: %1; FileId: %3; Number of retries: %2
0xB0002807Data Deduplication aborted the optimization pipeline.%nVolumePath: %1%nErrorCode: %2%nErrorMessage: %3Details: %4 Data Deduplication aborted the optimization pipeline.%nVolumePath: %1%nErrorCode: %2%nErrorMessage: %3Details: %4
0xB0002808Data Deduplication aborted a file.%nFileId: %1%nFilePath: %2%nFileSize: %3%nFlags: %4%nTotalRanges: %5%nSkippedRanges: %6%nAbortedRanges: %7%nCommittedRanges: %8%nErrorCode: %9%nErrorMessage: %10Details: %11 Data Deduplication aborted a file.%nFileId: %1%nFilePath: %2%nFileSize: %3%nFlags: %4%nTotalRanges: %5%nSkippedRanges: %6%nAbortedRanges: %7%nCommittedRanges: %8%nErrorCode: %9%nErrorMessage: %10Details: %11
0xB0002809Data Deduplication aborted a file range.%nFileId: %1%nFilePath: %2%nRangeOffset: %3%nRangeLength: %4%nErrorCode: %5%nErrorMessage: %6Details: %7 Data Deduplication aborted a file range.%nFileId: %1%nFilePath: %2%nRangeOffset: %3%nRangeLength: %4%nErrorCode: %5%nErrorMessage: %6Details: %7
0xB000280AData Deduplication aborted a session.%nMaxSize: %1%nCurrentSize: %2%nRemainingRanges: %3%nErrorCode: %4%nErrorMessage: %5Details: %6 Data Deduplication aborted a session.%nMaxSize: %1%nCurrentSize: %2%nRemainingRanges: %3%nErrorCode: %4%nErrorMessage: %5Details: %6
0xB000280BUSN journal created.%n%nVolume: %2 (%1)%nMaximum size %3 MB%nAllocation size %4 MB USN journal created.%n%nVolume: %2 (%1)%nMaximum size %3 MB%nAllocation size %4 MB
0xB000280CDataPort memory details for %1 job on volume %3 (%2). DataPort memory details for %1 job on volume %3 (%2).
0xB000280DData Deduplication detected a file with an ID that is not supported. Files whose identifiers cannot be packed into 64 bits will be skipped. FileId: %1 FileName: %2 Data Deduplication detected a file with an ID that is not supported. Files whose identifiers cannot be packed into 64 bits will be skipped. FileId: %1 FileName: %2
0xB000280EReconciliation should be run to ensure optimal savings.%n%nGuidance: This event is expected when Reconciliation is turned off for the DataPort job. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory. When Reconciliation would require 50% or more of the memory on the system, it is recommended that you (temporarily) cease running a DataPort job against this volume, and run an Optimization job. If Reconciliation is not run through an Optimization job before Reconciliation would require more than 100% of system memory, Reconciliation cannot be run again (unless more memory is added). This would result in permanently decreased space efficiency on this volume.%n%nVolume: %2 (%1)%nMemory percentage required: %3 Reconciliation should be run to ensure optimal savings.%n%nGuidance: This event is expected when Reconciliation is turned off for the DataPort job. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory. When Reconciliation would require 50% or more of the memory on the system, it is recommended that you (temporarily) cease running a DataPort job against this volume, and run an Optimization job. If Reconciliation is not run through an Optimization job before Reconciliation would require more than 100% of system memory, Reconciliation cannot be run again (unless more memory is added). This would result in permanently decreased space efficiency on this volume.%n%nVolume: %2 (%1)%nMemory percentage required: %3
0xB000280FData Deduplication optimization job will not run the reconciliation step due to inadequate memory.%n%nGuidance: Deduplication savings will be suboptimal until the optimization job is provided more memory, or more memory is added to the system.%n%nVolume: %2 (%1)%nMemory percentage required: %3 Data Deduplication optimization job will not run the reconciliation step due to inadequate memory.%n%nGuidance: Deduplication savings will be suboptimal until the optimization job is provided more memory, or more memory is added to the system.%n%nVolume: %2 (%1)%nMemory percentage required: %3
0xB0003200Data Deduplication service detected corruption in \"%5%6%7\". The corruption cannot be repaired. Data Deduplication service detected corruption in \"%5%6%7\". The corruption cannot be repaired.
0xB0003201Data Deduplication service detected corruption (%7) in \"%6\". See the event details for more information. Data Deduplication service detected corruption (%7) in \"%6\". See the event details for more information.
0xB0003202Data Deduplication service detected a corrupted item (%11 - %13, %8, %9, %10, %12) in Deduplication Chunk Store on volume %4. See the event details for more information. Data Deduplication service detected a corrupted item (%11 - %13, %8, %9, %10, %12) in Deduplication Chunk Store on volume %4. See the event details for more information.
0xB0003203Data Deduplication service has finished scrubbing on volume %3. It did not find any corruption since the last scrubbing. Data Deduplication service has finished scrubbing on volume %3. It did not find any corruption since the last scrubbing.
0xB0003204Data Deduplication service found %4 corruption(s) on volume %3. All corruptions are fixed. Data Deduplication service found %4 corruption(s) on volume %3. All corruptions are fixed.
0xB0003205Data Deduplication service found %4 corruption(s) on volume %3. %5 corruption(s) are fixed. %6 user file(s) are corrupted. %7 user file(s) are fixed. For the corrupted file list, see the Microsoft/Windows/Deduplication/Scrubbing events. Data Deduplication service found %4 corruption(s) on volume %3. %5 corruption(s) are fixed. %6 user file(s) are corrupted. %7 user file(s) are fixed. For the corrupted file list, see the Microsoft/Windows/Deduplication/Scrubbing events.
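Note: message 0xB0003205 refers to the Microsoft/Windows/Deduplication/Scrubbing events. A hedged example for listing recent entries; the channel name Microsoft-Windows-Deduplication/Scrubbing is assumed:
# list the most recent scrubbing events (channel name is an assumption)
Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Scrubbing" -MaxEvents 50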
0xB0003206Data Deduplication service found too many corruptions on volume %3. Some corruptions are not reported. Data Deduplication service found too many corruptions on volume %3. Some corruptions are not reported.
0xB0003211Data Deduplication service has finished scrubbing on volume %3. See the event details for more information. Data Deduplication service has finished scrubbing on volume %3. See the event details for more information.
0xB0003212Data Deduplication service encountered an error while processing file \"%5%6%7\". The error was %8. Data Deduplication service encountered an error while processing file \"%5%6%7\". The error was %8.
0xB0003213Data Deduplication service encountered too many errors while processing files on volume %3. The threshold was %4. Some user file corruptions may not be reported. Data Deduplication service encountered too many errors while processing files on volume %3. The threshold was %4. Some user file corruptions may not be reported.
0xB0003214Data Deduplication service encountered an error while detecting corruptions in the chunk store on volume %3. The error was %4. The job is aborted. Data Deduplication service encountered an error while detecting corruptions in the chunk store on volume %3. The error was %4. The job is aborted.
0xB0003216Data Deduplication service encountered an error while loading corruption logs on volume %3. The error was %4. The job continues. Some corruptions may not be detected. Data Deduplication service encountered an error while loading corruption logs on volume %3. The error was %4. The job continues. Some corruptions may not be detected.
0xB0003217Data Deduplication service encountered an error while cleaning up corruption logs on volume %3. The error was %4. Some corruptions may be reported again next time. Data Deduplication service encountered an error while cleaning up corruption logs on volume %3. The error was %4. Some corruptions may be reported again next time.
0xB0003218Data Deduplication service encountered an error while loading the hotspot mapping from the chunk store on volume %3. The error was %4. Some corruptions may not be repaired. Data Deduplication service encountered an error while loading the hotspot mapping from the chunk store on volume %3. The error was %4. Some corruptions may not be repaired.
0xB0003219Data Deduplication service encountered an error while determining corrupted user files on volume %3. The error was %4. Some user file corruptions may not be reported. Data Deduplication service encountered an error while determining corrupted user files on volume %3. The error was %4. Some user file corruptions may not be reported.
0xB000321AData Deduplication service found %4 corruption(s) on volume %3. %6 user file(s) are corrupted. %7 user file(s) are fixable. Please run a scrubbing job in read-write mode to attempt to fix the reported corruptions. Data Deduplication service found %4 corruption(s) on volume %3. %6 user file(s) are corrupted. %7 user file(s) are fixable. Please run a scrubbing job in read-write mode to attempt to fix the reported corruptions.
0xB000321BData Deduplication service fixed corruption in \"%5%6%7\". Data Deduplication service fixed corruption in \"%5%6%7\".
0xB000321CData Deduplication service detected fixable corruption in \"%5%6%7\". Please run a scrubbing job in read-write mode to fix this corruption. Data Deduplication service detected fixable corruption in \"%5%6%7\". Please run a scrubbing job in read-write mode to fix this corruption.
0xB000321EData Deduplication service encountered an error while repairing corruptions on volume %3. The error was %4. The repair was unsuccessful. Data Deduplication service encountered an error while repairing corruptions on volume %3. The error was %4. The repair was unsuccessful.
0xB000321FData Deduplication service detected a corrupted item (%6, %7, %8, %9) in Deduplication Chunk Store on volume %4. See the event details for more information. Data Deduplication service detected a corrupted item (%6, %7, %8, %9) in Deduplication Chunk Store on volume %4. See the event details for more information.
0xB0003220Container (%8,%9) with user data is missing from the chunk store. A missing container may result from an incomplete restore, an incomplete migration, or file-system corruption. The volume is disabled from further optimization. It is recommended to restore the volume before re-enabling it for further optimization. Container (%8,%9) with user data is missing from the chunk store. A missing container may result from an incomplete restore, an incomplete migration, or file-system corruption. The volume is disabled from further optimization. It is recommended to restore the volume before re-enabling it for further optimization.
0xB0003221Data Deduplication service encountered an error while scanning dedup user files on volume %3. The error was %4. Some user file corruptions may not be reported. Data Deduplication service encountered an error while scanning dedup user files on volume %3. The error was %4. Some user file corruptions may not be reported.
0xB0003224Data Deduplication service detected potential data loss (%9) in \"%6\" due to sharing reparse data with file \"%8\". See the event details for more information. Data Deduplication service detected potential data loss (%9) in \"%6\" due to sharing reparse data with file \"%8\". See the event details for more information.
0xB0003225Container (%8,%9) with user data is corrupt in the chunk store. It is recommended to restore the volume before re-enabling it for further optimization. Container (%8,%9) with user data is corrupt in the chunk store. It is recommended to restore the volume before re-enabling it for further optimization.
0xB0005000Open stream store stream (StartingChunkId %1, FileId %2) Open stream store stream (StartingChunkId %1, FileId %2)
0xB0005001Open stream store stream completed %1 Open stream store stream completed %1
0xB0005002Prepare for paging IO (Stream %1, FileId %2) Prepare for paging IO (Stream %1, FileId %2)
0xB0005003Prepare for paging IO completed %1 Prepare for paging IO completed %1
0xB0005005Read stream map completed %1 Read stream map completed %1
0xB0005006Read chunks (Stream %1, FileId %2, IoType %3, FirstRequestChunkId %4, NextRequest %5) Read chunks (Stream %1, FileId %2, IoType %3, FirstRequestChunkId %4, NextRequest %5)
0xB0005007Read chunks completed %1 Read chunks completed %1
0xB0005008Compute checksum (ItemType %1, DataSize %2) Compute checksum (ItemType %1, DataSize %2)
0xB0005009Compute checksum completed %1 Compute checksum completed %1
0xB000500AGet container entry (ContainerId %1, Generation %2) Get container entry (ContainerId %1, Generation %2)
0xB000500BGet container entry completed %1 Get container entry completed %1
0xB000500CGet maximum generation for container (ContainerId %1, Generation %2) Get maximum generation for container (ContainerId %1, Generation %2)
0xB000500DGet maximum generation for container completed %1 Get maximum generation for container completed %1
0xB000500EOpen chunk container (ContainerId %1, Generation %2, RootPath %4) Open chunk container (ContainerId %1, Generation %2, RootPath %4)
0xB000500FOpen chunk container completed %1 Open chunk container completed %1
0xB0005010Initialize chunk container redirection table (ContainerId %1, Generation %2) Initialize chunk container redirection table (ContainerId %1, Generation %2)
0xB0005011Initialize chunk container redirection table completed %1 Initialize chunk container redirection table completed %1
0xB0005012Validate chunk container redirection table (ContainerId %1, Generation %2) Validate chunk container redirection table (ContainerId %1, Generation %2)
0xB0005013Validate chunk container redirection table completed %1 Validate chunk container redirection table completed %1
0xB0005014Get chunk container valid data length (ContainerId %1, Generation %2) Get chunk container valid data length (ContainerId %1, Generation %2)
0xB0005015Get chunk container valid data length completed %1 Get chunk container valid data length completed %1
0xB0005016Get offset from chunk container redirection table (ContainerId %1, Generation %2) Get offset from chunk container redirection table (ContainerId %1, Generation %2)
0xB0005017Get offset from chunk container redirection table completed %1 Get offset from chunk container redirection table completed %1
0xB0005018Read chunk container block (ContainerId %1, Generation %2, Buffer %3, Offset %4, Length %5, IoType %6, Synchronous %7) Read chunk container block (ContainerId %1, Generation %2, Buffer %3, Offset %4, Length %5, IoType %6, Synchronous %7)
0xB0005019Read chunk container block completed %1 Read chunk container block completed %1
0xB000501AClear chunk container block (Buffer %1, Size %2, BufferType %3) Clear chunk container block (Buffer %1, Size %2, BufferType %3)
0xB000501BClear chunk container block completed %1 Clear chunk container block completed %1
0xB000501CCopy chunk (Buffer %1, Size %2, BufferType %3, BufferOffset %4, OutputCapacity %5) Copy chunk (Buffer %1, Size %2, BufferType %3, BufferOffset %4, OutputCapacity %5)
0xB000501DCopy chunk completed %1 Copy chunk completed %1
0xB000501EInitialize file cache (UnderlyingFileObject %1, CacheFileSize %2) Initialize file cache (UnderlyingFileObject %1, CacheFileSize %2)
0xB000501FInitialize file cache completed %1 Initialize file cache completed %1
0xB0005020Map file cache data (CacheFileObject %1, Offset %2, Length %3) Map file cache data (CacheFileObject %1, Offset %2, Length %3)
0xB0005021Map file cache data completed %1 Map file cache data completed %1
0xB0005022Unpin file cache data (Bcb %1) Unpin file cache data (Bcb %1)
0xB0005023Unpin file cache data completed %1 Unpin file cache data completed %1
0xB0005024Copy file cache data (CacheFileObject %1, Offset %2, Length %3) Copy file cache data (CacheFileObject %1, Offset %2, Length %3)
0xB0005025Copy file cache data completed %1 Copy file cache data completed %1
0xB0005026Read underlying file cache data (CacheFileObject %1, UnderlyingFileObject %2, Offset %3, Length %4) Read underlying file cache data (CacheFileObject %1, UnderlyingFileObject %2, Offset %3, Length %4)
0xB0005027Read underlying file cache data completed %1 Read underlying file cache data completed %1
0xB0005028Get chunk container file size (ContainerId %1, Generation %2) Get chunk container file size (ContainerId %1, Generation %2)
0xB0005029Get chunk container file size completed %1 Get chunk container file size completed %1
0xB000502APin stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4) Pin stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4)
0xB000502BPin stream map completed %1 Pin stream map completed %1
0xB000502CPin chunk container (ContainerId %1, Generation %2) Pin chunk container (ContainerId %1, Generation %2)
0xB000502DPin chunk container completed %1 Pin chunk container completed %1
0xB000502EPin chunk (ContainerId %1, Generation %2) Pin chunk (ContainerId %1, Generation %2)
0xB000502FPin chunk completed %1 Pin chunk completed %1
0xB0005030Allocate pool buffer (ReadLength %1, PagingIo %2) Allocate pool buffer (ReadLength %1, PagingIo %2)
0xB0005031Allocate pool buffer completed %1 Allocate pool buffer completed %1
0xB0005032Unpin chunk container (ContainerId %1, Generation %2) Unpin chunk container (ContainerId %1, Generation %2)
0xB0005033Unpin chunk container completed %1 Unpin chunk container completed %1
0xB0005034Unpin chunk (ContainerId %1, Generation %2) Unpin chunk (ContainerId %1, Generation %2)
0xB0005035Unpin chunk completed %1 Unpin chunk completed %1
0xB0006028Dedup read processing (FileObject %1, Offset %2, Length %3, IoType %4) Dedup read processing (FileObject %1, Offset %2, Length %3, IoType %4)
0xB0006029Dedup read processing completed %1 Dedup read processing completed %1
0xB000602AGet first stream map entry (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4) Get first stream map entry (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4)
0xB000602BGet first stream map entry completed %1 Get first stream map entry completed %1
0xB000602CRead chunk metadata (Stream %1, CurrentOffset %2, AdjustedFinalOffset %3, FirstChunkByteOffset %4, ChunkRequestsEndOffset %5, TlCache %6) Read chunk metadata (Stream %1, CurrentOffset %2, AdjustedFinalOffset %3, FirstChunkByteOffset %4, ChunkRequestsEndOffset %5, TlCache %6)
0xB000602DRead chunk metadata completed %1 Read chunk metadata completed %1
0xB000602ERead chunk data (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4) Read chunk data (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4)
0xB000602FRead chunk data completed %1 Read chunk data completed %1
0xB0006030Reference TlCache data (TlCache %1, Stream %2) Reference TlCache data (TlCache %1, Stream %2)
0xB0006031Reference TlCache data completed %1 Reference TlCache data completed %1
0xB0006032Read chunk data from stream store (Stream %1) Read chunk data from stream store (Stream %1)
0xB0006033Read chunk data from stream store completed %1 Read chunk data from stream store completed %1
0xB0006035Assemble chunk data completed %1 Assemble chunk data completed %1
0xB0006037Decompress chunk data completed %1 Decompress chunk data completed %1
0xB0006038Copy chunk data into user buffer (BytesCopied %1) Copy chunk data into user buffer (BytesCopied %1)
0xB0006039Copy chunk data into user buffer completed %1 Copy chunk data into user buffer completed %1
0xB000603BInsert chunk data into TlCache completed %1 Insert chunk data into TlCache completed %1
0xB000603CRead data from dedup reparse point file (FileObject %1, Offset %2, Length %3) Read data from dedup reparse point file (FileObject %1, Offset %2, Length %3)
0xB000603EPrepare stream map (StreamContext %1) Prepare stream map (StreamContext %1)
0xB000603FPrepare stream map completed %1 Prepare stream map completed %1
0xB0006040Patch clean ranges (FileObject %1, Offset %2, Length %3) Patch clean ranges (FileObject %1, Offset %2, Length %3)
0xB0006041Patch clean ranges completed %1 Patch clean ranges completed %1
0xB0006042Writing data to dedup file (FileObject %1, Offset %2, Length %3, IoType %4) Writing data to dedup file (FileObject %1, Offset %2, Length %3, IoType %4)
0xB0006043Writing data to dedup file completed %1 Writing data to dedup file completed %1
0xB0006044Queue write request on dedup file (FileObject %1, Offset %2, Length %3) Queue write request on dedup file (FileObject %1, Offset %2, Length %3)
0xB0006045Queue write request on dedup file completed %1 Queue write request on dedup file completed %1
0xB0006046Do copy on write work on dedup file (FileObject %1, Offset %2, Length %3) Do copy on write work on dedup file (FileObject %1, Offset %2, Length %3)
0xB0006047Do copy on write work on dedup file completed %1 Do copy on write work on dedup file completed %1
0xB0006048Do full recall on dedup file (FileObject %1, Offset %2, Length %3) Do full recall on dedup file (FileObject %1, Offset %2, Length %3)
0xB0006049Do full recall on dedup file completed %1 Do full recall on dedup file completed %1
0xB000604ADo partial recall on dedup file (FileObject %1, Offset %2, Length %3) Do partial recall on dedup file (FileObject %1, Offset %2, Length %3)
0xB000604BDo partial recall on dedup file completed %1 Do partial recall on dedup file completed %1
0xB000604CDo dummy paging read on dedup file (FileObject %1, Offset %2, Length %3) Do dummy paging read on dedup file (FileObject %1, Offset %2, Length %3)
0xB000604DDo dummy paging read on dedup file completed %1 Do dummy paging read on dedup file completed %1
0xB000604ERead clean data for recalling file (FileObject %1, Offset %2, Length %3) Read clean data for recalling file (FileObject %1, Offset %2, Length %3)
0xB000604FRead clean data for recalling file completed %1 Read clean data for recalling file completed %1
0xB0006050Write clean data to dedup file normally (FileObject %1, Offset %2, Length %3) Write clean data to dedup file normally (FileObject %1, Offset %2, Length %3)
0xB0006051Write clean data to dedup file completed %1 Write clean data to dedup file completed %1
0xB0006052Write clean data to dedup file paged (FileObject %1, Offset %2, Length %3) Write clean data to dedup file paged (FileObject %1, Offset %2, Length %3)
0xB0006053Write clean data to dedup file paged completed %1 Write clean data to dedup file paged completed %1
0xB0006054Recall dedup file using paging Io (FileObject %1, Offset %2, Length %3) Recall dedup file using paging Io (FileObject %1, Offset %2, Length %3)
0xB0006055Recall dedup file using paging Io completed %1 Recall dedup file using paging Io completed %1
0xB0006056Flush dedup file after recall (FileObject %1) Flush dedup file after recall (FileObject %1)
0xB0006057Flush dedup file after recall completed %1 Flush dedup file after recall completed %1
0xB0006058Update bitmap after recall on dedup file (FileObject %1, Offset %2, Length %3) Update bitmap after recall on dedup file (FileObject %1, Offset %2, Length %3)
0xB0006059Update bitmap after recall on dedup file completed %1 Update bitmap after recall on dedup file completed %1
0xB000605ADelete dedup reparse point (FileObject %1) Delete dedup reparse point (FileObject %1)
0xB000605BDelete dedup reparse point completed %1 Delete dedup reparse point completed %1
0xB000605COpen dedup file (FilePath %1) Open dedup file (FilePath %1)
0xB000605DOpen dedup file completed %1 Open dedup file completed %1
0xB000605FLocking user buffer for read completed %1 Locking user buffer for read completed %1
0xB0006061Get system address for MDL completed %1 Get system address for MDL completed %1
0xB0006062Read clean dedup file (FileObject %1, Offset %2, Length %3) Read clean dedup file (FileObject %1, Offset %2, Length %3)
0xB0006063Read clean dedup file completed %1 Read clean dedup file completed %1
0xB0006064Get range state (Offset %1, Length %2) Get range state (Offset %1, Length %2)
0xB0006065Get range state completed %1 Get range state completed %1
0xB0006067Get chunk body completed %1 Get chunk body completed %1
0xB0006069Release chunk completed %1 Release chunk completed %1
0xB000606ARelease decompress chunk context (BufferSize %1) Release decompress chunk context (BufferSize %1)
0xB000606BRelease decompress chunk context completed %1 Release decompress chunk context completed %1
0xB000606CPrepare decompress chunk context (BufferSize %1) Prepare decompress chunk context (BufferSize %1)
0xB000606DPrepare decompress chunk context completed %1 Prepare decompress chunk context completed %1
0xB000606ECopy data to compressed buffer (BufferSize %1) Copy data to compressed buffer (BufferSize %1)
0xB000606FCopy data to compressed buffer completed %1 Copy data to compressed buffer completed %1
0xB0006071Release data from TL Cache completed %1 Release data from TL Cache completed %1
0xB0006072Queue async read request (FileObject %1, Offset %2, Length %3) Queue async read request (FileObject %1, Offset %2, Length %3)
0xB0006073Queue async read request complete %1 Queue async read request complete %1
0xB0015004Read stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4) Read stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4)
0xB1004000Create chunk container (%1 - %2.%3.ccc) Create chunk container (%1 - %2.%3.ccc)
0xB1004001Create chunk container completed %1 Create chunk container completed %1
0xB1004002Copy chunk container (%1 - %2.%3.ccc) Copy chunk container (%1 - %2.%3.ccc)
0xB1004003Copy chunk container completed %1 Copy chunk container completed %1
0xB1004004Delete chunk container (%1 - %2.%3.ccc) Delete chunk container (%1 - %2.%3.ccc)
0xB1004005Delete chunk container completed %1 Delete chunk container completed %1
0xB1004006Rename chunk container (%1 - %2.%3.ccc%4) Rename chunk container (%1 - %2.%3.ccc%4)
0xB1004007Rename chunk container completed %1 Rename chunk container completed %1
0xB1004008Flush chunk container (%1 - %2.%3.ccc) Flush chunk container (%1 - %2.%3.ccc)
0xB1004009Flush chunk container completed %1 Flush chunk container completed %1
0xB100400ARollback chunk container (%1 - %2.%3.ccc) Rollback chunk container (%1 - %2.%3.ccc)
0xB100400BRollback chunk container completed %1 Rollback chunk container completed %1
0xB100400CMark chunk container (%1 - %2.%3.ccc) read-only Mark chunk container (%1 - %2.%3.ccc) read-only
0xB100400DMark chunk container read-only completed %1 Mark chunk container read-only completed %1
0xB100400EWrite chunk container (%1 - %2.%3.ccc) redirection table at offset %4 (Entries: StartIndex %5, Count %6) Write chunk container (%1 - %2.%3.ccc) redirection table at offset %4 (Entries: StartIndex %5, Count %6)
0xB100400FWrite chunk container redirection table completed %1 Write chunk container redirection table completed %1
0xB1004011Write chunk container header completed %1 Write chunk container header completed %1
0xB1004013Insert data chunk header completed %1 Insert data chunk header completed %1
0xB1004015Insert data chunk body completed %1 with ChunkId %2 Insert data chunk body completed %1 with ChunkId %2
0xB1004019Write delete log header completed %1 Write delete log header completed %1
0xB100401BAppend delete log entries completed %1 Append delete log entries completed %1
0xB100401DDelete delete log completed %1 Delete delete log completed %1
0xB100401FRename delete log completed %1 Rename delete log completed %1
0xB1004021Write chunk container bitmap completed %1 Write chunk container bitmap completed %1
0xB1004023Delete chunk container bitmap completed %1 Delete chunk container bitmap completed %1
0xB1004024Write merge log (%5 - %6.%7.merge.log) header Write merge log (%5 - %6.%7.merge.log) header
0xB1004025Write merge log header completed %1 Write merge log header completed %1
0xB1004027Insert hotspot chunk header completed %1 Insert hotspot chunk header completed %1
0xB1004029Insert hotspot chunk body completed %1 with ChunkId %2 Insert hotspot chunk body completed %1 with ChunkId %2
0xB100402BInsert stream map chunk header completed %1 Insert stream map chunk header completed %1
0xB100402DInsert stream map chunk body completed %1 with ChunkId %2 Insert stream map chunk body completed %1 with ChunkId %2
0xB100402FAppend merge log entries completed %1 Append merge log entries completed %1
0xB1004030Delete merge log (%1 - %2.%3.merge.log) Delete merge log (%1 - %2.%3.merge.log)
0xB1004031Delete merge log completed %1 Delete merge log completed %1
0xB1004032Flush merge log (%1 - %2.%3.merge.log) Flush merge log (%1 - %2.%3.merge.log)
0xB1004033Flush merge log completed %1 Flush merge log completed %1
0xB1004034Update file list entries (Remove: %1, Add: %2) Update file list entries (Remove: %1, Add: %2)
0xB1004035Update file list entries completed %1 Update file list entries completed %1
0xB1004036Set dedup reparse point on %2 (FileId %1) (ReparsePoint: SizeBackedByChunkStore %3, StreamMapInfoSize %4, StreamMapInfo %5) Set dedup reparse point on %2 (FileId %1) (ReparsePoint: SizeBackedByChunkStore %3, StreamMapInfoSize %4, StreamMapInfo %5)
0xB1004037Set dedup reparse point completed %1 (%2) Set dedup reparse point completed %1 (%2)
0xB1004038Set dedup zero data on %2 (FileId %1) Set dedup zero data on %2 (FileId %1)
0xB1004039Set dedup zero data completed %1 Set dedup zero data completed %1
0xB100403AFlush reparse point files Flush reparse point files
0xB100403BFlush reparse point files completed %1 Flush reparse point files completed %1
0xB100403CSet sparse on file id %1 Set sparse on file id %1
0xB100403DSet sparse completed %1 Set sparse completed %1
0xB100403EFSCTL_SET_ZERO_DATA on file id %1 at offset %2 and BeyondFinalZero %3 FSCTL_SET_ZERO_DATA on file id %1 at offset %2 and BeyondFinalZero %3
0xB100403FFSCTL_SET_ZERO_DATA completed %1 FSCTL_SET_ZERO_DATA completed %1
0xB1004040Rename chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) Rename chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4)
0xB1004041Rename chunk container bitmap completed %1 Rename chunk container bitmap completed %1
0xB1004042Insert padding chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorupted %6, DataSize %7) Insert padding chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorupted %6, DataSize %7)
0xB1004043Insert padding chunk header completed %1 Insert padding chunk header completed %1
0xB1004044Insert padding chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) Insert padding chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)
0xB1004045Insert padding chunk body completed %1 with ChunkId %2 Insert padding chunk body completed %1 with ChunkId %2
0xB1004046Insert batch of chunks to chunk container (%1 - %2.%3.ccc) at offset %4 (BatchChunkCount %5, BatchDataSize %6) Insert batch of chunks to chunk container (%1 - %2.%3.ccc) at offset %4 (BatchChunkCount %5, BatchDataSize %6)
0xB1004047Insert batch of chunks completed %1 Insert batch of chunks completed %1
0xB1004049Write chunk container directory completed %1 Write chunk container directory completed %1
0xB100404BDelete chunk container directory completed %1 Delete chunk container directory completed %1
0xB100404CRename chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) Rename chunk container directory (%1 - %2) for chunk container (%1 - %3.%4)
0xB100404DRename chunk container directory completed %1 Rename chunk container directory completed %1
0xB1014010Write chunk container (%5 - %6.%7.ccc) header at offset %8 (Header: USN %9, VDL %10, #Chunk %11, NextLocalId %12, Flags %13, LastAppendTime %14, BackupRedirectionTableOfset %15, LastReconciliationLocalId %16) Write chunk container (%5 - %6.%7.ccc) header at offset %8 (Header: USN %9, VDL %10, #Chunk %11, NextLocalId %12, Flags %13, LastAppendTime %14, BackupRedirectionTableOfset %15, LastReconciliationLocalId %16)
0xB1014012Insert data chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorupted %6, DataSize %7) Insert data chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorupted %6, DataSize %7)
0xB1014014Insert data chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) Insert data chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)
0xB1014018Write delete log (%5 - %6.%7.delete.log) header Write delete log (%5 - %6.%7.delete.log) header
0xB101401AAppend delete log (%1 - %2.%3.delete.log) entries at offset %4 (Entries: StartIndex %5, Count %6) Append delete log (%1 - %2.%3.delete.log) entries at offset %4 (Entries: StartIndex %5, Count %6)
0xB101401CDelete delete log (%1 - %2.%3.delete.log) Delete delete log (%1 - %2.%3.delete.log)
0xB101401ERename delete log (%1 - %2.%3.delete.log) Rename delete log (%1 - %2.%3.delete.log)
0xB1014020Write chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) (Bitmap: BitLength %5, StartIndex %6, Count %7) Write chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) (Bitmap: BitLength %5, StartIndex %6, Count %7)
0xB1014022Delete chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) Delete chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4)
0xB1014026Insert hotspot chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) Insert hotspot chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)
0xB1014028Insert hotspot chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) Insert hotspot chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)
0xB101402AInsert stream map chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) Insert stream map chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7)
0xB1014048Write chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) (Directory: EntryCount %5) Write chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) (Directory: EntryCount %5)
0xB101404ADelete chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) Delete chunk container directory (%1 - %2) for chunk container (%1 - %3.%4)
0xB102402EAppend merge log (%1 - %2.%3.merge.log) entries at offset %4 (Entries: StartIndex %5, Count %6) Append merge log (%1 - %2.%3.merge.log) entries at offset %4 (Entries: StartIndex %5, Count %6)
0xB103402CInsert stream map chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) (Entries: StartIndex %8, Count %9) Insert stream map chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) (Entries: StartIndex %8, Count %9)
0xD0000001Chunk header Chunk header
0xD0000002Chunk body Chunk body
0xD0000003Container header Container header
0xD0000004Container redirection table Container redirection table
0xD0000005Hotspot table Hotspot table
0xD0000006Delete log header Delete log header
0xD0000007Delete log entry Delete log entry
0xD0000008GC bitmap header GC bitmap header
0xD0000009GC bitmap entry GC bitmap entry
0xD000000AMerge log header Merge log header
0xD000000BMerge log entry Merge log entry
0xD000000CData Data
0xD000000EHotspot Hotspot
0xD000000FOptimization Optimization
0xD0000010Garbage Collection Garbage Collection
0xD0000011Scrubbing Scrubbing
0xD0000012Unoptimization Unoptimization
0xD0000013Analysis Analysis
0xD0000014Low Low
0xD0000015Normal Normal
0xD0000016High High
0xD0000017Cache Cache
0xD0000018Non-cache Non-cache
0xD0000019Paging Paging
0xD000001AMemory map Memory map
0xD000001BPaging memory map Paging memory map
0xD000001CNone None
0xD000001DPool Pool
0xD000001EPoolAligned PoolAligned
0xD000001FMDL MDL
0xD0000020Map Map
0xD0000021Cached Cached
0xD0000022NonCached NonCached
0xD0000023Paged Paged
0xD0000024container file container file
0xD0000025file list file file list file
0xD0000026file list header file list header
0xD0000027file list entry file list entry
0xD0000028primary file list file primary file list file
0xD0000029backup file list file backup file list file
0xD000002AScheduled Scheduled
0xD000002BManual Manual
0xD000002Crecall bitmap header recall bitmap header
0xD000002Drecall bitmap body recall bitmap body
0xD000002Erecall bitmap missing recall bitmap missing
0xD000002FRecall bitmap Recall bitmap
0xD0000030Unknown Unknown
0xD0000031The pipeline handle was closed The pipeline handle was closed
0xD0000032The file was deleted The file was deleted
0xD0000033The file was overwritten The file was overwritten
0xD0000034The file was recalled The file was recalled
0xD0000035A transaction was started on the file A transaction was started on the file
0xD0000036The file was encrypted The file was encrypted
0xD0000037The file was compressed The file was compressed
0xD0000038Set Zero Data was called on the file Set Zero Data was called on the file
0xD0000039Extended Attributes were set on the file Extended Attributes were set on the file
0xD000003AA section was created on the file A section was created on the file
0xD000003BThe file was shrunk The file was shrunk
0xD000003CA long-running IO operation prevented optimization A long-running IO operation prevented optimization
0xD000003DAn IO operation failed An IO operation failed
0xD000003ENotifying Optimization Notifying Optimization
0xD000003FSetting the Reparse Point Setting the Reparse Point
0xD0000040Truncating the file Truncating the file
0xD0000041DataPort DataPort
0xD1000002LZNT1 LZNT1
0xD1000003Xpress Xpress
0xD1000004Xpress Huff Xpress Huff
0xD1000006Standard Standard
0xD1000007Max Max
0xD1000008Hybrid Hybrid
0xF0000002Bad checksum Bad checksum
0xF0000003Inconsistent metadata Inconsistent metadata
0xF0000004Invalid header metadata Invalid header metadata
0xF0000005Missing file Missing file
0xF0000006Bad checksum (storage subsystem) Bad checksum (storage subsystem)
0xF0000007Corruption (storage subsystem) Corruption (storage subsystem)
0xF0000008Corruption (missing metadata) Corruption (missing metadata)
0xF0000009Possible data loss (duplicate reparse data) Possible data loss (duplicate reparse data)

EXIF

File Name: ddputils.dll.mui
Directory: %WINDIR%\WinSxS\amd64_microsoft-windows-dedup-common.resources_31bf3856ad364e35_10.0.15063.0_en-gb_2ca43a977656e41e\
File Size: 126 kB
File Permissions: rw-rw-rw-
File Type: Win32 DLL
File Type Extension: dll
MIME Type: application/octet-stream
Machine Type: Intel 386 or later, and compatibles
Time Stamp: 0000:00:00 00:00:00
PE Type: PE32
Linker Version: 14.10
Code Size: 0
Initialized Data Size: 129024
Uninitialized Data Size: 0
Entry Point: 0x0000
OS Version: 10.0
Image Version: 10.0
Subsystem Version: 6.0
Subsystem: Windows GUI
File Version Number: 10.0.15063.0
Product Version Number: 10.0.15063.0
File Flags Mask: 0x003f
File Flags: (none)
File OS: Windows NT 32-bit
Object File Type: Executable application
File Subtype: 0
Language Code: English (British)
Character Set: Unicode
Company Name: Microsoft Corporation
File Description: Microsoft Data Deduplication Common Library
File Version: 10.0.15063.0 (WinBuild.160101.0800)
Internal Name: ddputils.lib
Legal Copyright: © Microsoft Corporation. All rights reserved.
Original File Name: ddputils.lib.mui
Product Name: Microsoft® Windows® Operating System
Product Version: 10.0.15063.0

What is ddputils.dll.mui?

ddputils.dll.mui is a Multilingual User Interface (MUI) resource file that contains the English (British) language resources for ddputils.dll (Microsoft Data Deduplication Common Library).
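
The numeric identifiers in the table above appear to be Win32 message-table IDs, so, assuming they are stored in the module's message table, the strings can be looked up at run time with the standard FormatMessage API once the library is loaded as a data file. The C sketch below is only an illustration and not part of this file's documentation; it assumes ddputils.dll can be resolved from the system search path, and it reuses message ID 0xB0006059 and the English (United Kingdom) language ID (0x0809) shown on this page.

    #include <windows.h>
    #include <stdio.h>

    int wmain(void)
    {
        /* Map only the resources of the library; the loader resolves the
           matching .mui file (e.g. en-GB) from the language fallback list. */
        HMODULE mod = LoadLibraryExW(L"ddputils.dll", NULL,
                                     LOAD_LIBRARY_AS_DATAFILE);
        if (mod == NULL) {
            wprintf(L"LoadLibraryExW failed: %lu\n", GetLastError());
            return 1;
        }

        /* 0xB0006059 is one of the message IDs listed in the table above;
           FORMAT_MESSAGE_IGNORE_INSERTS keeps the %1 placeholder literal. */
        WCHAR text[512];
        DWORD len = FormatMessageW(FORMAT_MESSAGE_FROM_HMODULE |
                                   FORMAT_MESSAGE_IGNORE_INSERTS,
                                   mod,
                                   0xB0006059,
                                   MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_UK),
                                   text,
                                   ARRAYSIZE(text),
                                   NULL);
        if (len != 0) {
            wprintf(L"%s\n", text);
        } else {
            wprintf(L"FormatMessageW failed: %lu\n", GetLastError());
        }

        FreeLibrary(mod);
        return 0;
    }

If the ID is present in the message table, the output should match the "Update bitmap after recall on dedup file completed %1" entry listed above.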

File version info

File Description: Microsoft Data Deduplication Common Library
File Version: 10.0.15063.0 (WinBuild.160101.0800)
Company Name: Microsoft Corporation
Internal Name: ddputils.lib
Legal Copyright: © Microsoft Corporation. All rights reserved.
Original Filename: ddputils.lib.mui
Product Name: Microsoft® Windows® Operating System
Product Version: 10.0.15063.0
Translation: 0x809, 1200
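
The Translation value 0x809, 1200 is the language/code-page pair of the version resource: 0x0809 is the LANGID for English (United Kingdom) and 1200 identifies Unicode. As a minimal sketch using the Win32 version-information APIs (the relative file path below is illustrative, not taken from this page), the pair can be read back from the file like this:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>
    #pragma comment(lib, "version.lib")

    int wmain(void)
    {
        const WCHAR *path = L"ddputils.dll.mui";   /* illustrative path */

        DWORD handle = 0;
        DWORD size = GetFileVersionInfoSizeW(path, &handle);
        if (size == 0) {
            wprintf(L"GetFileVersionInfoSizeW failed: %lu\n", GetLastError());
            return 1;
        }

        BYTE *block = (BYTE *)malloc(size);
        if (block == NULL || !GetFileVersionInfoW(path, 0, size, block)) {
            wprintf(L"GetFileVersionInfoW failed: %lu\n", GetLastError());
            free(block);
            return 1;
        }

        /* \VarFileInfo\Translation is an array of LANGID/code page pairs;
           for this file it should hold 0x0809 (en-GB) and 1200 (Unicode). */
        struct LangCp { WORD language; WORD codePage; } *pairs = NULL;
        UINT bytes = 0;
        if (VerQueryValueW(block, L"\\VarFileInfo\\Translation",
                           (LPVOID *)&pairs, &bytes)) {
            for (UINT i = 0; i < bytes / sizeof(*pairs); i++) {
                wprintf(L"Translation: 0x%04x, %u\n",
                        (unsigned)pairs[i].language,
                        (unsigned)pairs[i].codePage);
            }
        }

        free(block);
        return 0;
    }

VerQueryValueW returns the pairs exactly as stored in the version resource, so for this file the loop should print the single Translation line shown above.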