100 | ddp |
ddp |
1001 | Operation: |
Operation: |
1002 | Context: |
Context: |
1003 | Error-specific details: |
Error-specific details: |
1004 | Failure: |
Failure: |
1011 | Error |
Error |
1012 | Volume name |
Volume name |
1013 | Shadow copy volume |
Shadow copy volume |
1014 | Configuration file |
Configuration file |
1015 | The domain controller is unavailable. |
The domain controller is unavailable. |
1016 | Server |
Server |
1017 | Domain |
Domain |
1018 | File name |
File name |
1020 | Directory |
Directory |
1021 | Chunk store |
Chunk store |
1022 | Chunk ID |
Chunk ID |
1023 | Stream map |
Stream map |
1024 | Chunk store container |
Chunk store container |
1025 | File path |
File path |
1026 | File ID |
File ID |
1027 | Chunk size |
Chunk size |
1028 | Chunk offset |
Chunk offset |
1029 | Chunk flags |
Chunk flags |
1030 | Recorded time |
Recorded time |
1031 | Error message |
Error message |
1034 | Source context |
Source context |
1037 | Inner error context |
Inner error context |
1038 | Error timestamp |
Error timestamp |
1039 | File offset |
File offset |
1040 | Failure reason |
Failure reason |
1041 | Retry count |
Retry count |
1042 | Request ID |
Request ID |
1043 | Stream map count |
Stream map count |
1044 | Chunk count |
Chunk count |
1045 | Data size |
Data size |
2001 | Starting File Server Deduplication Service. |
Starting File Server Deduplication Service. |
2002 | Stopping the Data Deduplication service. |
Stopping the Data Deduplication service. |
2003 | Checking the File Server Deduplication global configuration store. |
Checking the File Server Deduplication global configuration store. |
2101 | Initializing the data deduplication mini-filter. |
Initializing the data deduplication mini-filter. |
2105 | Sending backup components list to VSS system. |
Sending backup components list to VSS system. |
2106 | Preparing for backup. |
Preparing for backup. |
2107 | Performing pre-restore operations. |
Performing pre-restore operations. |
2108 | Performing post-restore operations. |
Performing post-restore operations. |
2110 | Processing File Server Deduplication event. |
Processing File Server Deduplication event. |
2111 | Creating a chunk store. |
Creating a chunk store. |
2112 | Initializing chunk store. |
Initializing chunk store. |
2113 | Uninitializing chunk store. |
Uninitializing chunk store. |
2114 | Creating a chunk store session. |
Creating a chunk store session. |
2115 | Committing a chunk store session. |
Committing a chunk store session. |
2116 | Aborting a chunk store session. |
Aborting a chunk store session. |
2117 | Initiating creation of a chunk store stream. |
Initiating creation of a chunk store stream. |
2118 | Inserting a new chunk to a chunk store stream. |
Inserting a new chunk to a chunk store stream. |
2119 | Inserting an existing chunk to a chunk stream. |
Inserting an existing chunk to a chunk stream. |
2120 | Committing creation of a chunk store stream. |
Committing creation of a chunk store stream. |
2121 | Aborting creation of a chunk store stream. |
Aborting creation of a chunk store stream. |
2122 | Committing changes to a chunk store container. |
Committing changes to a chunk store container. |
2123 | Changes made to a chunk store container have been flushed to disk. |
Changes made to a chunk store container have been flushed to disk. |
2124 | Making a new chunk store container ready to use. |
Making a new chunk store container ready to use. |
2125 | Rolling back the last committed changes to a chunk store container. |
Rolling back the last committed changes to a chunk store container. |
2126 | Marking a chunk store container as read-only. |
Marking a chunk store container as read-only. |
2127 | Enumerating all containers in a chunk store. |
Enumerating all containers in a chunk store. |
2128 | Preparing a chunk store container for chunk insertion. |
Preparing a chunk store container for chunk insertion. |
2129 | Initializing a new chunk store container. |
Initializing a new chunk store container. |
2130 | Opening an existing chunk store container. |
Opening an existing chunk store container. |
2131 | Inserting a new chunk to a chunk store container. |
Inserting a new chunk to a chunk store container. |
2132 | Repairing a chunk store stamp file. |
Repairing a chunk store stamp file. |
2133 | Creating a chunk store stamp file. |
Creating a chunk store stamp file. |
2134 | Opening a chunk store stream. |
Opening a chunk store stream. |
2135 | Reading stream map entries from a chunk store stream. |
Reading stream map entries from a chunk store stream. |
2136 | Reading a chunk store chunk. |
Reading a chunk store chunk. |
2137 | Closing a chunk store stream. |
Closing a chunk store stream. |
2138 | Reading a chunk store container. |
Reading a chunk store container. |
2139 | Opening a chunk store container log file. |
Opening a chunk store container log file. |
2140 | Reading a chunk store container log file. |
Reading a chunk store container log file. |
2141 | Writing entries to a chunk store container log file. |
Writing entries to a chunk store container log file. |
2142 | Enumerating chunk store container log files. |
Enumerating chunk store container log files. |
2143 | Deleting chunk store container log files. |
Deleting chunk store container log files. |
2144 | Reading a chunk store container bitmap file. |
Reading a chunk store container bitmap file. |
2145 | Writing a chunk store container bitmap file. |
Writing a chunk store container bitmap file. |
2146 | Deleting a chunk store container bitmap file. |
Deleting a chunk store container bitmap file. |
2147 | Starting chunk store garbage collection. |
Starting chunk store garbage collection. |
2148 | Indexing active chunk references. |
Indexing active chunk references. |
2149 | Processing deleted chunk store streams. |
Processing deleted chunk store streams. |
2150 | Identifying unreferenced chunks. |
Identifying unreferenced chunks. |
2151 | Enumerating the chunk store. |
Enumerating the chunk store. |
2152 | Initializing the chunk store enumerator. |
Initializing the chunk store enumerator. |
2153 | Initializing the stream map parser. |
Initializing the stream map parser. |
2154 | Iterating the stream map. |
Iterating the stream map. |
2155 | Initializing chunk store compaction. |
Initializing chunk store compaction. |
2156 | Compacting chunk store containers. |
Compacting chunk store containers. |
2157 | Initializing stream map compaction reconciliation. |
Initializing stream map compaction reconciliation. |
2158 | Reconciling stream maps due to data compaction. |
Reconciling stream maps due to data compaction. |
2159 | Initializing chunk store reconciliation. |
Initializing chunk store reconciliation. |
2160 | Reconciling duplicate chunks in the chunk store. |
Reconciling duplicate chunks in the chunk store. |
2161 | Initializing the deduplication garbage collection job. |
Initializing the deduplication garbage collection job. |
2162 | Running the deduplication garbage collection job. |
Running the deduplication garbage collection job. |
2163 | Canceling the deduplication garbage collection job. |
Canceling the deduplication garbage collection job. |
2164 | Waiting for the deduplication garbage collection job to complete. |
Waiting for the deduplication garbage collection job to complete. |
2165 | Initializing the deduplication job. |
Initializing the deduplication job. |
2166 | Running the deduplication job. |
Running the deduplication job. |
2167 | Canceling the deduplication job. |
Canceling the deduplication job. |
2168 | Waiting for the deduplication job to complete. |
Waiting for the deduplication job to complete. |
2169 | Initializing the deduplication scrubbing job. |
Initializing the deduplication scrubbing job. |
2170 | Running the deduplication scrubbing job. |
Running the deduplication scrubbing job. |
2171 | Canceling the deduplication scrubbing job. |
Canceling the deduplication scrubbing job. |
2172 | Waiting for the deduplication scrubbing job to complete. |
Waiting for the deduplication scrubbing job to complete. |
2173 | Opening a corruption log file. |
Opening a corruption log file. |
2174 | Reading a corruption log file. |
Reading a corruption log file. |
2175 | Writing an entry to a corruption log file. |
Writing an entry to a corruption log file. |
2176 | Enumerating corruption log files. |
Enumerating corruption log files. |
2206 | Creating a chunk store chunk sequence. |
Creating a chunk store chunk sequence. |
2207 | Adding a chunk to a chunk store sequence. |
Adding a chunk to a chunk store sequence. |
2208 | Completing creation of a chunk store sequence. |
Completing creation of a chunk store sequence. |
2209 | Reading a chunk store sequence. |
Reading a chunk store sequence. |
2210 | Continuing a chunk store sequence. |
Continuing a chunk store sequence. |
2211 | Aborting a chunk store sequence. |
Aborting a chunk store sequence. |
2212 | Initializing the deduplication analysis job. |
Initializing the deduplication analysis job. |
2213 | Running the deduplication analysis job. |
Running the deduplication analysis job. |
2214 | Canceling the deduplication analysis job. |
Canceling the deduplication analysis job. |
2215 | Waiting for the deduplication analysis job to complete. |
Waiting for the deduplication analysis job to complete. |
2216 | Repair chunk store container header. |
Repair chunk store container header. |
2217 | Repair chunk store container redirection table. |
Repair chunk store container redirection table. |
2218 | Repair chunk store chunk. |
Repair chunk store chunk. |
2219 | Clone chunk store container. |
Clone chunk store container. |
2220 | Scrubbing chunk store. |
Scrubbing chunk store. |
2221 | Detecting chunk store corruptions. |
Detecting chunk store corruptions. |
2222 | Loading the deduplication corruption logs. |
Loading the deduplication corruption logs. |
2223 | Cleaning up the deduplication corruption logs. |
Cleaning up the deduplication corruption logs. |
2224 | Determining the set of user files affected by chunk store corruptions. |
Determining the set of user files affected by chunk store corruptions. |
2225 | Reporting corruptions. |
Reporting corruptions. |
2226 | Estimating memory requirement for the deduplication scrubbing job. |
Estimating memory requirement for the deduplication scrubbing job. |
2227 | Deep garbage collection initialization has started. |
Deep garbage collection initialization has started. |
2228 | Starting deep garbage collection on stream map containers. |
Starting deep garbage collection on stream map containers. |
2229 | Starting deep garbage collection on data containers. |
Starting deep garbage collection on data containers. |
2230 | Initialize bitmaps on containers |
Initialize bitmaps on containers |
2231 | Scanning the reparse point index to determine which stream map is being referenced. |
Scanning the reparse point index to determine which stream map is being referenced. |
2232 | Saving deletion bitmap. |
Saving deletion bitmap. |
2233 | Scan the stream map containers to mark referenced chunks. |
Scan the stream map containers to mark referenced chunks. |
2234 | Convert bitmap to chunk delete log |
Convert bitmap to chunk delete log |
2235 | Compact Data Containers |
Compact Data Containers |
2236 | Compact Stream Map Containers |
Compact Stream Map Containers |
2237 | Change a chunk store container generation. |
Change a chunk store container generation. |
2238 | Start change logging. |
Start change logging. |
2239 | Stop change logging. |
Stop change logging. |
2240 | Add a merged target chunk store container. |
Add a merged target chunk store container. |
2241 | Processing tentatively deleted chunks. |
Processing tentatively deleted chunks. |
2242 | Check version of chunk store. |
Check version of chunk store. |
2243 | Initializing the corruption table. |
Initializing the corruption table. |
2244 | Writing out the corruption table. |
Writing out the corruption table. |
2245 | Deleting the corruption table file. |
Deleting the corruption table file. |
2246 | Repairing corruptions. |
Repairing corruptions. |
2247 | Updating corruption table with new logs. |
Updating corruption table with new logs. |
2248 | Destroying chunk store. |
Destroying chunk store. |
2249 | Marking chunk store as deleted. |
Marking chunk store as deleted. |
2250 | Inserting corruption entry into table. |
Inserting corruption entry into table. |
2251 | Checking chunk store consistency. |
Checking chunk store consistency. |
2252 | Updating a chunk store file list. |
Updating a chunk store file list. |
2253 | Recovering a chunk store file list from redundancy. |
Recovering a chunk store file list from redundancy. |
2254 | Adding an entry to a chunk store file list. |
Adding an entry to a chunk store file list. |
2255 | Replacing an entry in a chunk store file list. |
Replacing an entry in a chunk store file list. |
2256 | Deleting an entry in a chunk store file list. |
Deleting an entry in a chunk store file list. |
2257 | Reading a chunk store file list. |
Reading a chunk store file list. |
2258 | Reading a chunk store container directory file. |
Reading a chunk store container directory file. |
2259 | Writing a chunk store container directory file. |
Writing a chunk store container directory file. |
2260 | Deleting a chunk store container directory file. |
Deleting a chunk store container directory file. |
2261 | Setting FileSystem allocation for chunk store container file. |
Setting FileSystem allocation for chunk store container file. |
2262 | Initializing the deduplication unoptimization job. |
Initializing the deduplication unoptimization job. |
2263 | Running the deduplication unoptimization job. |
Running the deduplication unoptimization job. |
2264 | Restoring dedup file |
Restoring dedup file |
2265 | Reading dedup information |
Reading dedup information |
2266 | Building container list |
Building container list |
2267 | Building read plan |
Building read plan |
2268 | Executing read plan |
Executing read plan |
2269 | Running deep scrubbing |
Running deep scrubbing |
2270 | Scanning reparse point index during deep scrub |
Scanning reparse point index during deep scrub |
2271 | Logging reparse point during deep scrub |
Logging reparse point during deep scrub |
2272 | Scanning stream map containers during deep scrub |
Scanning stream map containers during deep scrub |
2273 | Scrubbing a stream map container |
Scrubbing a stream map container |
2274 | Logging a stream map's entries during deep scrub |
Logging a stream map's entries during deep scrub |
2275 | Reading a container's redirection table during deep scrub |
Reading a container's redirection table during deep scrub |
2276 | Scanning data containers during deep scrub |
Scanning data containers during deep scrub |
2277 | Scrubbing a data container |
Scrubbing a data container |
2278 | Scrubbing a data chunk |
Scrubbing a data chunk |
2279 | Verifying SM entry to DC hash link |
Verifying SM entry to DC hash link |
2280 | Logging a record during deep scrub |
Logging a record during deep scrub |
2281 | Writing a batch of log records during deep scrub |
Writing a batch of log records during deep scrub |
2282 | Finalizing a deep scrub temporary log |
Finalizing a deep scrub temporary log |
2283 | Deep scrubbing log manager log record |
Deep scrubbing log manager log record |
2284 | Finalizing deep scrub log manager |
Finalizing deep scrub log manager |
2285 | Initializing deep scrub chunk index table |
Initializing deep scrub chunk index table |
2286 | Inserting a chunk into deep scrub chunk index table |
Inserting a chunk into deep scrub chunk index table |
2287 | Looking up a chunk from deep scrub chunk index table |
Looking up a chunk from deep scrub chunk index table |
2288 | Rebuilding a chunk index table during deep scrub |
Rebuilding a chunk index table during deep scrub |
2289 | Resetting the deep scrubbing logger cache |
Resetting the deep scrubbing logger cache |
2290 | Resetting the deep scrubbing log manager |
Resetting the deep scrubbing log manager |
2291 | Scanning hotspot containers during deep scrub |
Scanning hotspot containers during deep scrub |
2292 | Scrubbing a hotspot container |
Scrubbing a hotspot container |
2293 | Scrubbing the hotspot table |
Scrubbing the hotspot table |
2294 | Cleaning up the deduplication deep scrub corruption logs |
Cleaning up the deduplication deep scrub corruption logs |
2295 | Computing deduplication file metadata |
Computing deduplication file metadata |
2296 | Scanning recall bitmap during deep scrub |
Scanning recall bitmap during deep scrub |
2297 | Loading a heat map for a user file |
Loading a heat map for a user file |
2298 | Saving a heat map for a user file |
Saving a heat map for a user file |
2299 | Inserting a hot chunk to a chunk stream. |
Inserting a hot chunk to a chunk stream. |
2300 | Deleting a heat map for a user file |
Deleting a heat map for a user file |
2301 | Creating shadow copy set. |
Creating shadow copy set. |
2302 | Initializing scan for optimization. |
Initializing scan for optimization. |
2303 | Scanning the NTFS USN journal |
Scanning the NTFS USN journal |
2304 | Initializing the USN scanner |
Initializing the USN scanner |
2305 | Start a new data chunkstore session |
Start a new data chunkstore session |
2306 | Commit a data chunkstore session |
Commit a data chunkstore session |
2307 | Initializing the deduplication data port job. |
Initializing the deduplication data port job. |
2308 | Running the deduplication data port job. |
Running the deduplication data port job. |
2309 | Canceling the deduplication data port job. |
Canceling the deduplication data port job. |
2310 | Waiting for the deduplication data port job to complete. |
Waiting for the deduplication data port job to complete. |
2311 | Lookup chunks request. |
Lookup chunks request. |
2312 | Insert chunks request. |
Insert chunks request. |
2313 | Commit stream maps request. |
Commit stream maps request. |
2314 | Get streams request. |
Get streams request. |
2315 | Get chunks request. |
Get chunks request. |
2401 | Initializing workload manager. |
Initializing workload manager. |
2402 | Canceling a job. |
Canceling a job. |
2403 | Enqueue a job. |
Enqueue a job. |
2404 | Initialize job manifest. |
Initialize job manifest. |
2405 | Launch a job host process. |
Launch a job host process. |
2406 | Validate a job host process. |
Validate a job host process. |
2407 | Initializing a job. |
Initializing a job. |
2408 | Terminate a job host process. |
Terminate a job host process. |
2409 | Uninitializing workload manager. |
Uninitializing workload manager. |
2410 | Handshaking with a job. |
Handshaking with a job. |
2411 | Job completion callback. |
Job completion callback. |
2412 | Running a job. |
Running a job. |
2413 | Checking ownership of CSV volume. |
Checking ownership of CSV volume. |
2414 | Adding CSV volume for monitoring. |
Adding CSV volume for monitoring. |
5001 | TRUE |
TRUE |
5002 | FALSE |
FALSE |
5003 | |
|
5005 | Unknown error |
Unknown error |
5101 | Data Deduplication Service |
Data Deduplication Service |
5102 | The Data Deduplication service enables the deduplication and compression of data on selected volumes in order to optimize disk space used. If this service is stopped, optimization will no longer occur but access to already optimized data will continue to function. |
The Data Deduplication service enables the deduplication and compression of data on selected volumes in order to optimize disk space used. If this service is stopped, optimization will no longer occur but access to already optimized data will continue to function. |
5105 | Dedup |
Dedup |
5106 | The Data Deduplication filter driver enables read/write I/O to deduplicated files. |
The Data Deduplication filter driver enables read/write I/O to deduplicated files. |
5201 | The chunk store on volume %s. Select this if you are using optimized backup. |
The chunk store on volume %s. Select this if you are using optimized backup. |
5202 | Data deduplication configuration on volume %s |
Data deduplication configuration on volume %s |
5203 | Data Deduplication Volume Shadow Copy Service |
Data Deduplication Volume Shadow Copy Service |
5204 | The Data Deduplication VSS writer guides backup applications to back up volumes with deduplication. |
The Data Deduplication VSS writer guides backup applications to back up volumes with deduplication. |
5205 | Data deduplication state on volume %s |
Data deduplication state on volume %s |
5301 | Data deduplication optimization |
Data deduplication optimization |
5302 | Data deduplication garbage collection |
Data deduplication garbage collection |
5303 | Data deduplication scrubbing |
Data deduplication scrubbing |
5304 | Data deduplication unoptimization |
Data deduplication unoptimization |
5305 | Queued |
Queued |
5306 | Initializing |
Initializing |
5307 | Running |
Running |
5308 | Completed |
Completed |
5309 | Pending Cancel |
Pending Cancel |
5310 | Canceled |
Canceled |
5311 | Failed |
Failed |
5312 | Data deduplication scrubbing job should be run on this volume. |
Data deduplication scrubbing job should be run on this volume. |
5313 | An unsupported path was detected and will be skipped. |
An unsupported path was detected and will be skipped. |
5314 | Data deduplication dataport |
Data deduplication dataport |
5401 | This task runs the data deduplication optimization job on all enabled volumes. |
This task runs the data deduplication optimization job on all enabled volumes. |
5402 | This task runs the data deduplication garbage collection job on all enabled volumes. |
This task runs the data deduplication garbage collection job on all enabled volumes. |
5403 | This task runs the data deduplication scrubbing job on all enabled volumes. |
This task runs the data deduplication scrubbing job on all enabled volumes. |
5404 | This task runs the data deduplication unoptimization job on all enabled volumes. |
This task runs the data deduplication unoptimization job on all enabled volumes. |
5405 | This task runs the data deduplication data port job on all enabled volumes. |
This task runs the data deduplication data port job on all enabled volumes. |
0x00565301 | Reconciliation of chunk store is due. |
Reconciliation of chunk store is due. |
0x00565302 | There are no actions associated with this job. |
There are no actions associated with this job. |
0x00565303 | Data deduplication cannot run this job on this CSV volume on this node. |
Data deduplication cannot run this job on this CSV volume on this node. |
0x00565304 | Data deduplication cannot run this cmdlet on this CSV volume on this node. |
Data deduplication cannot run this cmdlet on this CSV volume on this node. |
0x10000001 | Reporting |
Reporting |
0x10000002 | Filter |
Filter |
0x10000003 | Kernel mode stream store |
Kernel mode stream store |
0x10000004 | Kernel mode chunk store |
Kernel mode chunk store |
0x10000005 | Kernel mode chunk container |
Kernel mode chunk container |
0x10000006 | Kernel mode file cache |
Kernel mode file cache |
0x30000000 | Info |
Info |
0x30000001 | Start |
Start |
0x30000002 | Stop |
Stop |
0x50000003 | Warning |
Warning |
0x50000004 | Information |
Information |
0x70000001 | Data Deduplication Optimization Task |
Data Deduplication Optimization Task |
0x70000002 | Data Deduplication Garbage Collection Task |
Data Deduplication Garbage Collection Task |
0x70000003 | Data Deduplication Scrubbing Task |
Data Deduplication Scrubbing Task |
0x70000004 | Data Deduplication Unoptimization Task |
Data Deduplication Unoptimization Task |
0x70000005 | Open stream store stream |
Open stream store stream |
0x70000006 | Prepare for paging IO |
Prepare for paging IO |
0x70000007 | Read stream map |
Read stream map |
0x70000008 | Read chunks |
Read chunks |
0x70000009 | Compute checksum |
Compute checksum |
0x7000000A | Get container entry |
Get container entry |
0x7000000B | Get maximum generation for container |
Get maximum generation for container |
0x7000000C | Open chunk container |
Open chunk container |
0x7000000D | Initialize chunk container redirection table |
Initialize chunk container redirection table |
0x7000000E | Validate chunk container redirection table |
Validate chunk container redirection table |
0x7000000F | Get chunk container valid data length |
Get chunk container valid data length |
0x70000010 | Get offset from chunk container redirection table |
Get offset from chunk container redirection table |
0x70000011 | Read chunk container block |
Read chunk container block |
0x70000012 | Clear chunk container block |
Clear chunk container block |
0x70000013 | Copy chunk |
Copy chunk |
0x70000014 | Initialize file cache |
Initialize file cache |
0x70000015 | Map file cache data |
Map file cache data |
0x70000016 | Unpin file cache data |
Unpin file cache data |
0x70000017 | Copy file cache data |
Copy file cache data |
0x70000018 | Read underlying file cache data |
Read underlying file cache data |
0x70000019 | Get chunk container file size |
Get chunk container file size |
0x7000001A | Pin stream map |
Pin stream map |
0x7000001B | Pin chunk container |
Pin chunk container |
0x7000001C | Pin chunk |
Pin chunk |
0x7000001D | Allocate pool buffer |
Allocate pool buffer |
0x7000001E | Unpin chunk container |
Unpin chunk container |
0x7000001F | Unpin chunk |
Unpin chunk |
0x70000020 | Dedup read processing |
Dedup read processing |
0x70000021 | Get first stream map entry |
Get first stream map entry |
0x70000022 | Read chunk metadata |
Read chunk metadata |
0x70000023 | Read chunk data |
Read chunk data |
0x70000024 | Reference TlCache data |
Reference TlCache data |
0x70000025 | Read chunk data from stream store |
Read chunk data from stream store |
0x70000026 | Assemble chunk data |
Assemble chunk data |
0x70000027 | Decompress chunk data |
Decompress chunk data |
0x70000028 | Copy chunk data into user buffer |
Copy chunk data into user buffer |
0x70000029 | Insert chunk data into TlCache |
Insert chunk data into TlCache |
0x7000002A | Read data from dedup reparse point file |
Read data from dedup reparse point file |
0x7000002B | Prepare stream map |
Prepare stream map |
0x7000002C | Patch clean ranges |
Patch clean ranges |
0x7000002D | Writing data to dedup file |
Writing data to dedup file |
0x7000002E | Queue write request on dedup file |
Queue write request on dedup file |
0x7000002F | Do copy on write work on dedup file |
Do copy on write work on dedup file |
0x70000030 | Do full recall on dedup file |
Do full recall on dedup file |
0x70000031 | Do partial recall on dedup file |
Do partial recall on dedup file |
0x70000032 | Do dummy paging read on dedup file |
Do dummy paging read on dedup file |
0x70000033 | Read clean data for recalling file |
Read clean data for recalling file |
0x70000034 | Write clean data to dedup file normally |
Write clean data to dedup file normally |
0x70000035 | Write clean data to dedup file paged |
Write clean data to dedup file paged |
0x70000036 | Recall dedup file using paging IO |
Recall dedup file using paging IO |
0x70000037 | Flush dedup file after recall |
Flush dedup file after recall |
0x70000038 | Update bitmap after recall on dedup file |
Update bitmap after recall on dedup file |
0x70000039 | Delete dedup reparse point |
Delete dedup reparse point |
0x7000003A | Open dedup file |
Open dedup file |
0x7000003B | Locking user buffer for read |
Locking user buffer for read |
0x7000003C | Get system address for MDL |
Get system address for MDL |
0x7000003D | Read clean dedup file |
Read clean dedup file |
0x7000003E | Get range state |
Get range state |
0x7000003F | Get chunk body |
Get chunk body |
0x70000040 | Release chunk |
Release chunk |
0x70000041 | Release decompress chunk context |
Release decompress chunk context |
0x70000042 | Prepare decompress chunk context |
Prepare decompress chunk context |
0x70000043 | Copy data to compressed buffer |
Copy data to compressed buffer |
0x70000044 | Release data from TL Cache |
Release data from TL Cache |
0x70000045 | Queue async read request |
Queue async read request |
0x80565301 | The requested object was not found. |
The requested object was not found. |
0x80565302 | One (or more) of the arguments given to the task scheduler is not valid. |
One (or more) of the arguments given to the task scheduler is not valid. |
0x80565303 | The specified object already exists. |
The specified object already exists. |
0x80565304 | The specified path was not found. |
The specified path was not found. |
0x80565305 | The specified user is invalid. |
The specified user is invalid. |
0x80565306 | The specified path is invalid. |
The specified path is invalid. |
0x80565307 | The specified name is invalid. |
The specified name is invalid. |
0x80565308 | The specified property is out of range. |
The specified property is out of range. |
0x80565309 | A required filter driver is either not installed, not loaded, or not ready for service. |
A required filter driver is either not installed, not loaded, or not ready for service. |
0x8056530A | There is insufficient disk space to perform the requested operation. |
There is insufficient disk space to perform the requested operation. |
0x8056530B | The specified volume type is not supported. Deduplication is supported on fixed, write-enabled NTFS data volumes and CSV backed by NTFS data volumes. |
The specified volume type is not supported. Deduplication is supported on fixed, write-enabled NTFS data volumes and CSV backed by NTFS data volumes. |
0x8056530C | Data deduplication encountered an unexpected error. Check the Data Deduplication Operational event log for more information. |
Data deduplication encountered an unexpected error. Check the Data Deduplication Operational event log for more information. |
0x8056530D | The specified scan log cursor has expired. |
The specified scan log cursor has expired. |
0x8056530E | The file system might be corrupted. Please run the CHKDSK utility. |
The file system might be corrupted. Please run the CHKDSK utility. |
0x8056530F | A volume shadow copy could not be created or was unexpectedly deleted. |
A volume shadow copy could not be created or was unexpectedly deleted. |
0x80565310 | Data deduplication encountered a corrupted XML configuration file. |
Data deduplication encountered a corrupted XML configuration file. |
0x80565311 | The Data Deduplication service could not access the global configuration because the Cluster service is not running. |
The Data Deduplication service could not access the global configuration because the Cluster service is not running. |
0x80565312 | The Data Deduplication service could not access the global configuration because it has not been installed yet. |
The Data Deduplication service could not access the global configuration because it has not been installed yet. |
0x80565313 | Data deduplication failed to access the volume. It may be offline. |
Data deduplication failed to access the volume. It may be offline. |
0x80565314 | The module encountered an invalid parameter or a valid parameter with an invalid value, or an expected module parameter was not found. Check the operational event log for more information. |
The module encountered an invalid parameter or a valid parameter with an invalid value, or an expected module parameter was not found. Check the operational event log for more information. |
0x80565315 | An attempt was made to perform an initialization operation when initialization has already been completed. |
An attempt was made to perform an initialization operation when initialization has already been completed. |
0x80565316 | An attempt was made to perform an uninitialization operation when that operation has already been completed. |
An attempt was made to perform an uninitialization operation when that operation has already been completed. |
0x80565317 | The Data Deduplication service detected an internal folder that is not secure. To secure the folder, reinstall deduplication on the volume. |
The Data Deduplication service detected an internal folder that is not secure. To secure the folder, reinstall deduplication on the volume. |
0x80565318 | Data chunking has already been initiated. |
Data chunking has already been initiated. |
0x80565319 | An attempt was made to perform an operation from an invalid state. |
An attempt was made to perform an operation from an invalid state. |
0x8056531A | An attempt was made to perform an operation before initialization. |
An attempt was made to perform an operation before initialization. |
0x8056531B | Call ::PushBuffer to continue chunking or ::Drain to enumerate any partial chunks. |
Call ::PushBuffer to continue chunking or ::Drain to enumerate any partial chunks. |
0x8056531C | The Data Deduplication service detected multiple chunk store folders; however, only one chunk store folder is permitted. To fix this issue, reinstall deduplication on the volume. |
The Data Deduplication service detected multiple chunk store folders; however, only one chunk store folder is permitted. To fix this issue, reinstall deduplication on the volume. |
0x8056531D | The data is invalid. |
The data is invalid. |
0x8056531E | The process is in an unknown state. |
The process is in an unknown state. |
0x8056531F | The process is not running. |
The process is not running. |
0x80565320 | There was an error while opening the file. |
There was an error while opening the file. |
0x80565321 | The job process could not start because the job was not found. |
The job process could not start because the job was not found. |
0x80565322 | The client process ID does not match the ID of the host process that was started. |
The client process ID does not match the ID of the host process that was started. |
0x80565323 | The specified volume is not enabled for deduplication. |
The specified volume is not enabled for deduplication. |
0x80565324 | A zero-character chunk ID is not valid. |
A zero-character chunk ID is not valid. |
0x80565325 | The index is filled to capacity. |
The index is filled to capacity. |
0x80565327 | Session already exists. |
Session already exists. |
0x80565328 | The compression format selected is not supported. |
The compression format selected is not supported. |
0x80565329 | The compressed buffer is larger than the uncompressed buffer. |
The compressed buffer is larger than the uncompressed buffer. |
0x80565330 | The buffer is not large enough. |
The buffer is not large enough. |
0x8056533A | Index Scratch Log Error in: Seek, Read, Write, or Create |
Index Scratch Log Error in: Seek, Read, Write, or Create |
0x8056533B | The job type is invalid. |
The job type is invalid. |
0x8056533C | Persistence layer enumeration error. |
Persistence layer enumeration error. |
0x8056533D | The operation was canceled. |
The operation was canceled. |
0x8056533E | This job will not run at the scheduled time because it requires more memory than is currently available. |
This job will not run at the scheduled time because it requires more memory than is currently available. |
0x80565341 | The job was terminated while in a cancel or pending state. |
The job was terminated while in a cancel or pending state. |
0x80565342 | The job was terminated while in a handshake pending state. |
The job was terminated while in a handshake pending state. |
0x80565343 | The job was terminated due to a service shutdown. |
The job was terminated due to a service shutdown. |
0x80565344 | The job was abandoned before starting. |
The job was abandoned before starting. |
0x80565345 | The job process exited unexpectedly. |
The job process exited unexpectedly. |
0x80565346 | The Data Deduplication service detected that the container cannot be compacted or updated because it has reached the maximum generation version. |
The Data Deduplication service detected that the container cannot be compacted or updated because it has reached the maximum generation version. |
0x80565347 | The corruption log has reached its maximum size. |
The corruption log has reached its maximum size. |
0x80565348 | The data deduplication scrubbing job failed to process the corruption logs. |
The data deduplication scrubbing job failed to process the corruption logs. |
0x80565349 | Data deduplication failed to create new chunk store container files. Allocate more space to the volume. |
Data deduplication failed to create new chunk store container files. Allocate more space to the volume. |
0x80565350 | An error occurred while opening the file because the file was in use. |
An error occurred while opening the file because the file was in use. |
0x80565351 | An error was discovered while deduplicating the file. The file is now skipped. |
An error was discovered while deduplicating the file. The file is now skipped. |
0x80565352 | File Server Deduplication encountered corruption while enumerating chunks in a chunk store. |
File Server Deduplication encountered corruption while enumerating chunks in a chunk store. |
0x80565353 | The scan log is not valid. |
The scan log is not valid. |
0x80565354 | The data is invalid due to a checksum (CRC) mismatch error. |
The data is invalid due to a checksum (CRC) mismatch error. |
0x80565355 | Data deduplication encountered a file corruption error. |
Data deduplication encountered a file corruption error. |
0x80565356 | Job completed with some errors. Check event logs for more details. |
Job completed with some errors. Check event logs for more details. |
0x80565357 | Data deduplication is not supported on the version of the chunk store found on this volume. |
Data deduplication is not supported on the version of the chunk store found on this volume. |
0x80565358 | Data deduplication encountered an unknown version of chunk store on this volume. |
Data deduplication encountered an unknown version of chunk store on this volume. |
0x80565359 | The job was assigned less memory than the minimum it needs to run. |
The job was assigned less memory than the minimum it needs to run. |
0x8056535A | The data deduplication job schedule cannot be modified. |
The data deduplication job schedule cannot be modified. |
0x8056535B | The valid data length of chunk store container is misaligned. |
The valid data length of chunk store container is misaligned. |
0x8056535C | File access is denied. |
File access is denied. |
0x8056535D | Data deduplication job stopped due to too many corrupted files. |
Data deduplication job stopped due to too many corrupted files. |
0x8056535E | Data deduplication job stopped due to an internal error in the BCrypt SHA-512 provider. |
Data deduplication job stopped due to an internal error in the BCrypt SHA-512 provider. |
0x8056535F | Data deduplication job stopped for store reconciliation. |
Data deduplication job stopped for store reconciliation. |
0x80565360 | File skipped for deduplication due to its size. |
File skipped for deduplication due to its size. |
0x80565361 | File skipped due to deduplication retry limit. |
File skipped due to deduplication retry limit. |
0x80565362 | The pipeline buffer cache is full. |
The pipeline buffer cache is full. |
0x80565363 | Another Data deduplication job is already running on this volume. |
Another Data deduplication job is already running on this volume. |
0x80565364 | Data deduplication cannot run this job on this CSV volume on this node. Try running the job on the CSV volume resource owner node. |
Data deduplication cannot run this job on this CSV volume on this node. Try running the job on the CSV volume resource owner node. |
0x80565365 | Data deduplication failed to initialize cluster state on this node. |
Data deduplication failed to initialize cluster state on this node. |
0x80565366 | Optimization of the range was aborted by the dedup filter driver. |
Optimization of the range was aborted by the dedup filter driver. |
0x80565367 | The operation could not be performed because of a concurrent IO operation. |
The operation could not be performed because of a concurrent IO operation. |
0x80565368 | Data deduplication encountered an unexpected error. Verify deduplication is enabled on all nodes if in a cluster configuration. Check the Data Deduplication Operational event log for more information. |
Data deduplication encountered an unexpected error. Verify deduplication is enabled on all nodes if in a cluster configuration. Check the Data Deduplication Operational event log for more information. |
0x80565369 | Data access for data deduplicated CSV volumes can only be disabled when in maintenance mode. Check the Data Deduplication Operational event log for more information. |
Data access for data deduplicated CSV volumes can only be disabled when in maintenance mode. Check the Data Deduplication Operational event log for more information. |
0x8056536A | Data Deduplication encountered an IO device error that may indicate a hardware fault in the storage subsystem. |
Data Deduplication encountered an IO device error that may indicate a hardware fault in the storage subsystem. |
0x8056536B | Data deduplication cannot run this cmdlet on this CSV volume on this node. Try running the cmdlet on the CSV volume resource owner node. |
Data deduplication cannot run this cmdlet on this CSV volume on this node. Try running the cmdlet on the CSV volume resource owner node. |
0x8056536C | Deduplication job not supported during rolling cluster upgrade. |
Deduplication job not supported during rolling cluster upgrade. |
0x8056536D | Deduplication setting not supported during rolling cluster upgrade. |
Deduplication setting not supported during rolling cluster upgrade. |
0x8056536E | Data port job is not ready to accept requests. |
Data port job is not ready to accept requests. |
0x8056536F | Data port request not accepted because the request count/size limit was exceeded. |
Data port request not accepted because the request count/size limit was exceeded. |
0x80565370 | Data port request completed with some errors. Check event logs for more details. |
Data port request completed with some errors. Check event logs for more details. |
0x80565371 | Data port request failed. Check event logs for more details. |
Data port request failed. Check event logs for more details. |
0x80565372 | Data port error accessing the hash index. Check event logs for more details. |
Data port error accessing the hash index. Check event logs for more details. |
0x80565373 | Data port error accessing the stream store. Check event logs for more details. |
Data port error accessing the stream store. Check event logs for more details. |
0x80565374 | Data port file stub error. Check event logs for more details. |
Data port file stub error. Check event logs for more details. |
0x80565375 | Data port encountered a deduplication filter error. Check event logs for more details. |
Data port encountered a deduplication filter error. Check event logs for more details. |
0x80565376 | Data port cannot commit stream map due to missing chunk. Check event logs for more details. |
Data port cannot commit stream map due to missing chunk. Check event logs for more details. |
0x80565377 | Data port cannot commit stream map due to invalid stream map metadata. Check event logs for more details. |
Data port cannot commit stream map due to invalid stream map metadata. Check event logs for more details. |
0x80565378 | Data port cannot commit stream map due to invalid stream map entry. Check event logs for more details. |
Data port cannot commit stream map due to invalid stream map entry. Check event logs for more details. |
0x80565379 | Data port cannot retrieve job interface for volume. Check event logs for more details. |
Data port cannot retrieve job interface for volume. Check event logs for more details. |
0x8056537A | The specified path is not supported. |
The specified path is not supported. |
0x8056537B | Data port cannot decompress chunk. Check event logs for more details. |
Data port cannot decompress chunk. Check event logs for more details. |
0x8056537C | Data port cannot calculate chunk hash. Check event logs for more details. |
Data port cannot calculate chunk hash. Check event logs for more details. |
0x8056537D | Data port cannot read chunk stream. Check event logs for more details. |
Data port cannot read chunk stream. Check event logs for more details. |
0x8056537E | The target file is not a deduplicated file. Check event logs for more details. |
The target file is not a deduplicated file. Check event logs for more details. |
0x8056537F | The target file is partially recalled. Check event logs for more details. |
The target file is partially recalled. Check event logs for more details. |
0x90000001 | Data Deduplication |
Data Deduplication |
0x90000002 | Application |
Application |
0x91000001 | Data Deduplication Change Events |
Data Deduplication Change Events |
0xB0001000 | Volume \"%1\" appears to be disconnected and is ignored by the service. You may want to rescan disks. Error: %2.%n%3 |
Volume \"%1\" appears to be disconnected and is ignored by the service. You may want to rescan disks. Error: %2.%n%3 |
0xB0001001 | The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Most likely the CPU is under heavy load. Error: %4.%n%5 |
The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Most likely the CPU is under heavy load. Error: %4.%n%5 |
0xB0001002 | The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Error: %4.%n%5 |
The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\". Error: %4.%n%5 |
0xB0001003 | The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\" during Safe Mode. The Data Deduplication service cannot start while in safe mode. Error: %4.%n%5 |
The COM Server with CLSID %1 and name \"%2\" cannot be started on machine \"%3\" during Safe Mode. The Data Deduplication service cannot start while in safe mode. Error: %4.%n%5 |
0xB0001004 | A critical component required by Data Deduplication is not registered. This might happen if an error occurred during Windows setup, or if the computer does not have the Windows Server 2012 or later version of Deduplication service installed. The error returned from CoCreateInstance on class with CLSID %1 and Name \"%2\" on machine \"%3\" is %4.%n%5 |
A critical component required by Data Deduplication is not registered. This might happen if an error occurred during Windows setup, or if the computer does not have the Windows Server 2012 or later version of Deduplication service installed. The error returned from CoCreateInstance on class with CLSID %1 and Name \"%2\" on machine \"%3\" is %4.%n%5 |
0xB0001005 | Data Deduplication service is shutting down due to idle timeout.%n%1 |
Data Deduplication service is shutting down due to idle timeout.%n%1 |
0xB0001006 | Data Deduplication service is shutting down due to a shutdown event from the Service Control Manager.%n%1 |
Data Deduplication service is shutting down due to a shutdown event from the Service Control Manager.%n%1 |
0xB0001007 | Data Deduplication job of type \"%1\" on volume \"%2\" has completed with return code: %3%n%4 |
Data Deduplication job of type \"%1\" on volume \"%2\" has completed with return code: %3%n%4 |
0xB0001008 | Data Deduplication error: Unexpected error calling routine %1. hr = %2.%n%3 |
Data Deduplication error: Unexpected error calling routine %1. hr = %2.%n%3 |
0xB0001009 | Data Deduplication error: Unexpected error.%n%1 |
Data Deduplication error: Unexpected error.%n%1 |
0xB000100A | Data Deduplication warning: %1%nError: %2.%n%3 |
Data Deduplication warning: %1%nError: %2.%n%3 |
0xB000100B | Data Deduplication error: Unexpected COM error %1: %2. Error code: %3.%n%4 |
Data Deduplication error: Unexpected COM error %1: %2. Error code: %3.%n%4 |
0xB000100C | Data Deduplication was unable to access the following file or volume: \"%1\". This file or volume might be locked by another application right now, or you might need to give Local System access to it.%n%2 |
Data Deduplication was unable to access the following file or volume: \"%1\". This file or volume might be locked by another application right now, or you might need to give Local System access to it.%n%2 |
0xB000100D | Data Deduplication encountered an unexpected error during volume scan of volumes mounted at \"%1\" (\"%2\"). To find more information about the root cause of this error, please consult the Application/System event log for other Deduplication service, VSS, or VOLSNAP errors related to these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3 |
Data Deduplication encountered an unexpected error during volume scan of volumes mounted at \"%1\" (\"%2\"). To find more information about the root cause of this error, please consult the Application/System event log for other Deduplication service, VSS, or VOLSNAP errors related to these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3 |
0xB000100E | Data Deduplication was unable to create or access the shadow copy for volumes mounted at \"%1\" (\"%2\"). Possible causes include an improper Shadow Copy configuration, insufficient disk space, or extreme memory, I/O or CPU load of the system. To find more information about the root cause of this error, please consult the Application/System event log for other Deduplication service, VSS, or VOLSNAP errors related to these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3 |
Data Deduplication was unable to create or access the shadow copy for volumes mounted at \"%1\" (\"%2\"). Possible causes include an improper Shadow Copy configuration, insufficient disk space, or extreme memory, I/O or CPU load of the system. To find more information about the root cause of this error, please consult the Application/System event log for other Deduplication service, VSS, or VOLSNAP errors related to these volumes. Also, you might want to make sure that you can create shadow copies on these volumes by using the VSSADMIN command like this: VSSADMIN CREATE SHADOW /For=C:%n%3 |
0xB000100F | Data Deduplication was unable to access volumes mounted at \"%1\" (\"%2\"). Make sure that dismount or format operations do not happen while running deduplication.%n%3 |
Data Deduplication was unable to access volumes mounted at \"%1\" (\"%2\"). Make sure that dismount or format operations do not happen while running deduplication.%n%3 |
0xB0001010 | Data Deduplication was unable to access a file or volume. Details:%n%n%1%n The volume may be inaccessible for I/O operations or marked read-only. In case of a cluster volume, this may be a transient failure during failover.%n%2 |
Data Deduplication was unable to access a file or volume. Details:%n%n%1%n The volume may be inaccessible for I/O operations or marked read-only. In case of a cluster volume, this may be a transient failure during failover.%n%2 |
0xB0001011 | Data Deduplication was unable to scan volume \"%1\" (\"%2\").%n%3 |
Data Deduplication was unable to scan volume \"%1\" (\"%2\").%n%3 |
0xB0001012 | Data Deduplication detected a corruption on file \"%1\" at offset (\"%2\"). If this condition persists, please restore the data from a previous backup. Corruption details: Structure=%3, Corruption type = %4, Additional data = %5%n%6 |
Data Deduplication detected a corruption on file \"%1\" at offset (\"%2\"). If this condition persists, please restore the data from a previous backup. Corruption details: Structure=%3, Corruption type = %4, Additional data = %5%n%6 |
0xB0001013 | Data Deduplication encountered failure while reconciling chunk store on volume \"%1\". The error code was %2. Reconciliation is disabled for the current optimization job.%n%3 |
Data Deduplication encountered failure while reconciling chunk store on volume \"%1\". The error code was %2. Reconciliation is disabled for the current optimization job.%n%3 |
0xB0001016 | Data Deduplication encountered corrupted chunk container %1 while performing full garbage collection. The corrupted chunk container is skipped.%n%2 |
Data Deduplication encountered corrupted chunk container %1 while performing full garbage collection. The corrupted chunk container is skipped.%n%2 |
0xB0001017 | Data Deduplication could not initialize change log under %1. The error code was %2.%n%3 |
Data Deduplication could not initialize change log under %1. The error code was %2.%n%3 |
0xB0001018 | Data Deduplication service could not mark chunk container %1 as reconciled. The error code was %2.%n%3 |
Data Deduplication service could not mark chunk container %1 as reconciled. The error code was %2.%n%3 |
0xB0001019 | A Data Deduplication configuration file is corrupted. The system or volume may need to be restored from backup.%n%1 |
A Data Deduplication configuration file is corrupted. The system or volume may need to be restored from backup.%n%1 |
0xB000101A | Data Deduplication was unable to save one of the configuration stores on volume \"%1\" due to a disk-full error. If the disk is full, please clean it up (extend the volume or delete some files). If the disk is not full, but there is a hard quota on the volume root, please delete, disable or increase this quota.%n%2 |
Data Deduplication was unable to save one of the configuration stores on volume \"%1\" due to a disk-full error. If the disk is full, please clean it up (extend the volume or delete some files). If the disk is not full, but there is a hard quota on the volume root, please delete, disable or increase this quota.%n%2 |
0xB000101B | Data Deduplication could not access global configuration since the cluster service is not running. Please start the cluster service and retry the operation.%n%1 |
Data Deduplication could not access global configuration since the cluster service is not running. Please start the cluster service and retry the operation.%n%1 |
0xB000101C | Shadow copy \"%1\" was deleted during storage report generation. Volume \"%2\" might be configured with inadequate shadow copy storage area. Data Deduplication could not process this volume.%n%3 |
Shadow copy \"%1\" was deleted during storage report generation. Volume \"%2\" might be configured with inadequate shadow copy storage area. Data Deduplication could not process this volume.%n%3 |
0xB000101D | Shadow copy creation failed for volume \"%1\" after retrying for %2 minutes because other shadow copies were being created. Reschedule the Data Deduplication job for a less busy time.%n%3 |
Shadow copy creation failed for volume \"%1\" after retrying for %2 minutes because other shadow copies were being created. Reschedule the Data Deduplication job for a less busy time.%n%3 |
0xB000101E | Volume \"%1\" is not supported for shadow copy. It is possible that the volume was removed from the system. Data Deduplication service could not process this volume.%n%2 |
Volume \"%1\" is not supported for shadow copy. It is possible that the volume was removed from the system. Data Deduplication service could not process this volume.%n%2 |
0xB000101F | The volume \"%1\" has been deleted or removed from the system.%n%2 |
The volume \"%1\" has been deleted or removed from the system.%n%2 |
0xB0001020 | Shadow copy creation failed for volume \"%1\" with error %2. The volume might be configured with inadequate shadow copy storage area. File Server Deduplication service could not process this volume.%n%3 |
Shadow copy creation failed for volume \"%1\" with error %2. The volume might be configured with inadequate shadow copy storage area. File Server Deduplication service could not process this volume.%n%3 |
0xB0001021 | The file system on volume \"%1\" is potentially corrupted. Please run the CHKDSK utility to verify and fix the file system.%n%2 |
The file system on volume \"%1\" is potentially corrupted. Please run the CHKDSK utility to verify and fix the file system.%n%2 |
0xB0001022 | Data Deduplication detected an insecure internal folder. To secure the folder, reinstall deduplication on the volume.%n%1 |
Data Deduplication detected an insecure internal folder. To secure the folder, reinstall deduplication on the volume.%n%1 |
0xB0001023 | Data Deduplication could not find a chunk store on the volume.%n%1 |
Data Deduplication could not find a chunk store on the volume.%n%1 |
0xB0001024 | Data Deduplication detected multiple chunk store folders. To recover, reinstall deduplication on the volume.%n%1 |
Data Deduplication detected multiple chunk store folders. To recover, reinstall deduplication on the volume.%n%1 |
0xB0001025 | Data Deduplication detected conflicting chunk store folders: \"%1\" and \"%2\".%n%3 |
Data Deduplication detected conflicting chunk store folders: \"%1\" and \"%2\".%n%3 |
0xB0001026 | The data is invalid.%n%1 |
The data is invalid.%n%1 |
0xB0001027 | Data Deduplication scheduler failed to initialize with error \"%1\".%n%2 |
Data Deduplication scheduler failed to initialize with error \"%1\".%n%2 |
0xB0001028 | Data Deduplication failed to validate job type \"%1\" on volume \"%2\" with error \"%3\".%n%4 |
Data Deduplication failed to validate job type \"%1\" on volume \"%2\" with error \"%3\".%n%4 |
0xB0001029 | Data Deduplication failed to start job type \"%1\" on volume \"%2\" with error \"%3\".%n%4 |
Data Deduplication failed to start job type \"%1\" on volume \"%2\" with error \"%3\".%n%4 |
0xB000102C | Data Deduplication detected that job type \"%1\" on volume \"%2\" uses too much memory. %3 MB is assigned. %4 MB is used.%n%5 |
Data Deduplication detected that job type \"%1\" on volume \"%2\" uses too much memory. %3 MB is assigned. %4 MB is used.%n%5 |
0xB000102D | Data Deduplication detected that the memory usage of job type \"%1\" on volume \"%2\" has dropped to a desirable level.%n%3 |
Data Deduplication detected that the memory usage of job type \"%1\" on volume \"%2\" has dropped to a desirable level.%n%3 |
0xB000102E | Data Deduplication cancelled job type \"%1\" on volume \"%2\". It used more memory than the amount assigned to it.%n%3 |
Data Deduplication cancelled job type \"%1\" on volume \"%2\". It used more memory than the amount assigned to it.%n%3 |
0xB000102F | Data Deduplication cancelled job type \"%1\" on volume \"%2\". Memory resources are running low on the machine or in the job.%n%3 |
Data Deduplication cancelled job type \"%1\" on volume \"%2\". Memory resources are running low on the machine or in the job.%n%3 |
0xB0001030 | Data Deduplication job type \"%1\" on volume \"%2\" failed to report completion to the service with error: %3.%n%4 |
Data Deduplication job type \"%1\" on volume \"%2\" failed to report completion to the service with error: %3.%n%4 |
0xB0001031 | Data Deduplication detected that a container cannot be compacted or updated because it has reached the maximum generation.%n%1 |
Data Deduplication detected that a container cannot be compacted or updated because it has reached the maximum generation.%n%1 |
0xB0001032 | Data Deduplication corruption log \"%1\" is corrupted.%n%2 |
Data Deduplication corruption log \"%1\" is corrupted.%n%2 |
0xB0001033 | Data Deduplication corruption log \"%1\" has reached the maximum allowed size \"%2\". Please run a scrubbing job to process the corruption log. No more corruptions will be reported until the log is processed.%n%3 |
Data Deduplication corruption log \"%1\" has reached the maximum allowed size \"%2\". Please run a scrubbing job to process the corruption log. No more corruptions will be reported until the log is processed.%n%3 |
0xB0001034 | Data Deduplication corruption log \"%1\" has reached the maximum allowed size \"%2\". No more corruptions will be reported until the log is processed.%n%3 |
Data Deduplication corruption log \"%1\" has reached the maximum allowed size \"%2\". No more corruptions will be reported until the log is processed.%n%3 |
0xB0001035 | Data Deduplication scheduler failed to uninitialize with error \"%1\".%n%2 |
Data Deduplication scheduler failed to uninitialize with error \"%1\".%n%2 |
0xB0001036 | Data Deduplication detected that a new container could not be created in a chunk store because it ran out of available container IDs.%n%1 |
Data Deduplication detected that a new container could not be created in a chunk store because it ran out of available container IDs.%n%1 |
0xB0001037 | Data Deduplication full garbage collection phase 1 (cleaning file related metadata) on volume \"%1\" failed with error: %2. The job will continue with phase 2 execution (data chunk cleanup).%n%3 |
Data Deduplication full garbage collection phase 1 (cleaning file related metadata) on volume \"%1\" failed with error: %2. The job will continue with phase 2 execution (data chunk cleanup).%n%3 |
0xB0001039 | Data Deduplication full garbage collection could not achieve maximum space reclamation because delete logs for data container %1 could not be cleaned up.%n%2 |
Data Deduplication full garbage collection could not achieve maximum space reclamation because delete logs for data container %1 could not be cleaned up.%n%2 |
0xB000103A | Some files could not be deduplicated because of FSRM quota violations on volume %1. The skipped files are likely compressed or sparse files in folders that are at or close to their quota limit. Please consider increasing the quota limit for those folders.%n%2 |
Some files could not be deduplicated because of FSRM quota violations on volume %1. The skipped files are likely compressed or sparse files in folders that are at or close to their quota limit. Please consider increasing the quota limit for those folders.%n%2 |
0xB000103B | Data Deduplication failed to dedup file %1 \"%2\" due to fatal error %3%n%4 |
Data Deduplication failed to dedup file %1 \"%2\" due to fatal error %3%n%4 |
0xB000103C | Data Deduplication encountered corruption while accessing a file in the chunk store.%n%1 |
Data Deduplication encountered corruption while accessing a file in the chunk store.%n%1 |
0xB000103D | Data Deduplication encountered corruption while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1 |
Data Deduplication encountered corruption while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1 |
0xB000103E | Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1 |
Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store. Please run a scrubbing job for diagnosis and repair.%n%1 |
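A scrubbing job can be started manually with Start-DedupJob; a minimal sketch, with D: as a placeholder volume:

    # Run a scrubbing job to diagnose and repair chunk store corruption
    Start-DedupJob -Volume "D:" -Type Scrubbing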
0xB000103F | Data Deduplication is unable to access file %1 because the file is in use.%n%2 |
Data Deduplication is unable to access file %1 because the file is in use.%n%2 |
0xB0001040 | Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store.%n%1 |
Data Deduplication encountered a checksum (CRC) mismatch error while accessing a file in the chunk store.%n%1 |
0xB0001041 | Data Deduplication cannot run the job on volume %1 because the dedup store version compatibility check failed with error %2.%n%3 |
Data Deduplication cannot run the job on volume %1 because the dedup store version compatibility check failed with error %2.%n%3 |
0xB0001042 | Data Deduplication has disabled the volume %1 because it has discovered too many corruptions. Please run deep scrubbing on the volume.%n%2 |
Data Deduplication has disabled the volume %1 because it has discovered too many corruptions. Please run deep scrubbing on the volume.%n%2 |
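Deep scrubbing is typically run as a full scrubbing job; a sketch, assuming the -Full switch of Start-DedupJob corresponds to deep scrubbing and with D: as a placeholder volume:

    # Run a full (deep) scrubbing job on the affected volume
    Start-DedupJob -Volume "D:" -Type Scrubbing -Full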
0xB0001043 | Data Deduplication has detected a corrupt corruption metadata file on the store at %1. Please run deep scrubbing on the volume.%n%2 |
Data Deduplication has detected a corrupt corruption metadata file on the store at %1. Please run deep scrubbing on the volume.%n%2 |
0xB0001044 | Volume \"%1\" cannot be enabled for Data Deduplication. Data Deduplication does not support volumes larger than 64TB. Error: %2.%n%3 |
Volume \"%1\" cannot be enabled for Data Deduplication. Data Deduplication does not support volumes larger than 64TB. Error: %2.%n%3 |
0xB0001045 | Data Deduplication cannot be enabled on SIS volume \"%1\". Error: %2.%n%3 |
Data Deduplication cannot be enabled on SIS volume \"%1\". Error: %2.%n%3 |
0xB0001046 | The file system is configured for case-sensitive file/folder names. Data Deduplication does not support case-sensitive file system mode.%n%1 |
The file system is configured for case-sensitive file/folder names. Data Deduplication does not support case-sensitive file system mode.%n%1 |
0xB0001049 | Data Deduplication changed scrubbing job to read-only due to insufficient disk space.%n%1 |
Data Deduplication changed scrubbing job to read-only due to insufficient disk space.%n%1 |
0xB000104B | Data Deduplication has disabled the volume %1 because there are missing or corrupt containers. Please run deep scrubbing on the volume.%n%2 |
Data Deduplication has disabled the volume %1 because there are missing or corrupt containers. Please run deep scrubbing on the volume.%n%2 |
0xB000104D | Data Deduplication encountered a disk-full error.%n%1 |
Data Deduplication encountered a disk-full error.%n%1 |
0xB000104E | Data Deduplication job cannot run on volume \"%1\" due to insufficient disk space.%n%2 |
Data Deduplication job cannot run on volume \"%1\" due to insufficient disk space.%n%2 |
0xB000104F | Data Deduplication job cannot run on offline volume \"%1\".%n%2 |
Data Deduplication job cannot run on offline volume \"%1\".%n%2 |
0xB0001050 | Data Deduplication recovered a corrupt or missing file.%n%1 |
Data Deduplication recovered a corrupt or missing file.%n%1 |
0xB0001051 | Data Deduplication encountered a corrupted metadata file. To correct the problem, schedule or manually run a Garbage Collection job on the affected volume with the -Full option.%n%1 |
Data Deduplication encountered a corrupted metadata file. To correct the problem, schedule or manually run a Garbage Collection job on the affected volume with the -Full option.%n%1 |
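A full Garbage Collection job can be started with Start-DedupJob, for example (D: is a placeholder volume):

    # Run a full Garbage Collection job on the affected volume
    Start-DedupJob -Volume "D:" -Type GarbageCollection -Full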
0xB0001052 | Data Deduplication encountered chunk %1 with corrupted header while updating container. The corrupted chunk is replicated to the new container %2.%n%3 |
Data Deduplication encountered chunk %1 with corrupted header while updating container. The corrupted chunk is replicated to the new container %2.%n%3 |
0xB0001053 | Data Deduplication encountered chunk %1 with transient header corruption while updating container. The corrupted chunk is NOT replicated to the new container %2.%n%3 |
Data Deduplication encountered chunk %1 with transient header corruption while updating container. The corrupted chunk is NOT replicated to the new container %2.%n%3 |
0xB0001054 | Data Deduplication failed to read chunk container redirection table from file %1 with error %2.%n%3 |
Data Deduplication failed to read chunk container redirection table from file %1 with error %2.%n%3 |
0xB0001055 | Data Deduplication failed to initialize reparse point index table for deep scrubbing from file %1 with error %2.%n%3 |
Data Deduplication failed to initialize reparse point index table for deep scrubbing from file %1 with error %2.%n%3 |
0xB0001056 | Data Deduplication failed to deep scrub container file %1 on volume %2 with error %3.%n%4 |
Data Deduplication failed to deep scrub container file %1 on volume %2 with error %3.%n%4 |
0xB0001057 | Data Deduplication failed to load stream map log for deep scrubbing from file %1 with error %2.%n%3 |
Data Deduplication failed to load stream map log for deep scrubbing from file %1 with error %2.%n%3 |
0xB0001058 | Data Deduplication found a duplicate local chunk id %1 in container file %2.%n%3 |
Data Deduplication found a duplicate local chunk id %1 in container file %2.%n%3 |
0xB0001059 | Data Deduplication job type \"%1\" on volume \"%2\" was cancelled manually.%n%3 |
Data Deduplication job type \"%1\" on volume \"%2\" was cancelled manually.%n%3 |
0xB000105A | Scheduled Data Deduplication job type \"%1\" on volume \"%2\" was cancelled.%n%3 |
Scheduled Data Deduplication job type \"%1\" on volume \"%2\" was cancelled.%n%3 |
0xB000105D | The Data Deduplication chunk store statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2 |
The Data Deduplication chunk store statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2 |
0xB000105E | The Data Deduplication volume statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2 |
The Data Deduplication volume statistics file on volume \"%1\" is corrupted and will be reset. Statistics will be updated by a subsequent job and can be updated manually by running the Update-DedupStatus cmdlet.%n%2 |
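The statistics can be refreshed manually with the Update-DedupStatus cmdlet, for example (D: is a placeholder volume):

    # Recompute deduplication savings and status for the volume, then display them
    Update-DedupStatus -Volume "D:"
    Get-DedupStatus -Volume "D:"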
0xB000105F | Data Deduplication failed to append to deep scrubbing log file %1 with error %2.%n%3 |
Data Deduplication failed to append to deep scrubbing log file %1 with error %2.%n%3 |
0xB0001060 | Data Deduplication encountered a failure during deep scrubbing on store %1 with error %2.%n%3 |
Data Deduplication encountered a failure during deep scrubbing on store %1 with error %2.%n%3 |
0xB0001061 | Data Deduplication cancelled job type \"%1\" on volume \"%2\". The job violated the CSV dedup job placement policy.%n%3 |
Data Deduplication cancelled job type \"%1\" on volume \"%2\". The job violated the CSV dedup job placement policy.%n%3 |
0xB0001062 | Data Deduplication cancelled job type \"%1\" on volume \"%2\". The CSV job monitor has been uninitialized.%n%3 |
Data Deduplication cancelled job type \"%1\" on volume \"%2\". The CSV job monitor has been uninitialized.%n%3 |
0xB0001063 | Data Deduplication encountered an I/O device error while accessing a file on the volume. This is likely a hardware fault in the storage subsystem.%n%1 |
Data Deduplication encountered an I/O device error while accessing a file on the volume. This is likely a hardware fault in the storage subsystem.%n%1 |
0xB0001064 | Data Deduplication encountered an unexpected error. If this is a cluster, verify Data Deduplication is enabled on all nodes of the cluster.%n%1 |
Data Deduplication encountered an unexpected error. If this is a cluster, verify Data Deduplication is enabled on all nodes of the cluster.%n%1 |
0xB0001065 | Attempted to disable data access for data deduplicated CSV volume \"%1\" without maintenance mode. Data access can only be disabled for a CSV volume when in maintenance mode. Place volume into maintenance mode and retry.%n%2 |
Attempted to disable data access for data deduplicated CSV volume \"%1\" without maintenance mode. Data access can only be disabled for a CSV volume when in maintenance mode. Place volume into maintenance mode and retry.%n%2 |
0xB0001800 | Data Deduplication service could not unoptimize file \"%5%6%7\". Error %8, \"%9\". |
Data Deduplication service could not unoptimize file \"%5%6%7\". Error %8, \"%9\". |
0xB0001801 | Data Deduplication service failed to unoptimize too many files (%3). Some files are not reported. |
Data Deduplication service failed to unoptimize too many files (%3). Some files are not reported. |
0xB0001802 | Data Deduplication service has finished unoptimization on volume %3 with no errors. |
Data Deduplication service has finished unoptimization on volume %3 with no errors. |
0xB0001803 | Data Deduplication service has finished unoptimization on volume %3 with %4 errors. |
Data Deduplication service has finished unoptimization on volume %3 with %4 errors. |
0xB0001804 | %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10 |
%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10 |
0xB0001805 | %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nPriority: %7%nFull: %8%nVolume free space (MB): %9 |
%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable cores: %6%nPriority: %7%nFull: %8%nVolume free space (MB): %9 |
0xB0001806 | %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6%nFull: %7%nRead-only: %8 |
%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6%nFull: %7%nRead-only: %8 |
0xB0001807 | %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6 |
%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nPriority: %6 |
0xB0001809 | %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nIn-policy file count: %12%nJob processed space (MB): %13%nJob elapsed time (seconds): %18%nJob throughput (MB/second): %19%nChurn processing throughput (MB/second): %20 |
%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nIn-policy file count: %12%nJob processed space (MB): %13%nJob elapsed time (seconds): %18%nJob throughput (MB/second): %19%nChurn processing throughput (MB/second): %20 |
0xB000180A | %1 job has completed.%n%nFull: %2%nVolume: %5 (%4)%nError code: %6%nError message: %7%nFreed up space (MB): %8%nVolume free space (MB): %9%nJob elapsed time (seconds): %10%nJob throughput (MB/second): %11 |
%1 job has completed.%n%nFull: %2%nVolume: %5 (%4)%nError code: %6%nError message: %7%nFreed up space (MB): %8%nVolume free space (MB): %9%nJob elapsed time (seconds): %10%nJob throughput (MB/second): %11 |
0xB000180B | %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6 |
%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6 |
0xB000180C | %1 job has completed.%n%nFull: %2%nRead-only: %3%nVolume: %6 (%5)%nError code: %7%nError message: %8%nTotal corruption count: %9%nFixable corruption count: %10%n%nWhen corruptions are found, check more details in Scrubbing event channel. |
%1 job has completed.%n%nFull: %2%nRead-only: %3%nVolume: %6 (%5)%nError code: %7%nError message: %8%nTotal corruption count: %9%nFixable corruption count: %10%n%nWhen corruptions are found, check more details in Scrubbing event channel. |
0xB000180D | %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nUnoptimized file count: %7%nJob processed space (MB): %8%nJob elapsed time (seconds): %9%nJob throughput (MB/second): %10 |
%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nUnoptimized file count: %7%nJob processed space (MB): %8%nJob elapsed time (seconds): %9%nJob throughput (MB/second): %10 |
0xB000180E | %1 job has been queued.%n%nVolume: %4 (%3)%nSystem memory percent: %5 %nPriority: %6%nSchedule mode: %7 |
%1 job has been queued.%n%nVolume: %4 (%3)%nSystem memory percent: %5 %nPriority: %6%nSchedule mode: %7 |
0xB000181C | Restore of deduplicated file \"%1\" failed with the following error: %2, \"%3\". |
Restore of deduplicated file \"%1\" failed with the following error: %2, \"%3\". |
0xB000181D | Priority %1 job has started.%n%nVolume: %4 (%3)%nFile ID: %11%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10 |
Priority %1 job has started.%n%nVolume: %4 (%3)%nFile ID: %11%nAvailable memory: %5 MB%nAvailable cores: %6%nInstances: %7%nReaders per instance: %8%nPriority: %9%nIoThrottle: %10 |
0xB000181E | %1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable threads: %6%nPriority: %7 |
%1 job has started.%n%nVolume: %4 (%3)%nAvailable memory: %5 MB%nAvailable threads: %6%nPriority: %7 |
0xB000181F | %1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nChunk lookup count: %12%nInserted chunk count: %13%nInserted chunks logical data (MB): %14%nInserted chunks physical data (MB): %15%nCommitted stream count: %16%nCommitted stream entry count: %17%nCommitted stream logical data (MB): %18%nRetrieved chunks physical data (MB): %19%nRetrieved stream logical data (MB): %20%nDataPort time (seconds): %21%nJob elapsed time (seconds): %22%nIngress throughput (MB/second): %23%nEgress throughput (MB/second): %24 |
%1 job has completed.%n%nVolume: %4 (%3)%nError code: %5%nError message: %6%nSavings rate (percent): %7%nSaved space (MB): %8%nVolume used space (MB): %9%nVolume free space (MB): %10%nOptimized file count: %11%nChunk lookup count: %12%nInserted chunk count: %13%nInserted chunks logical data (MB): %14%nInserted chunks physical data (MB): %15%nCommitted stream count: %16%nCommitted stream entry count: %17%nCommitted stream logical data (MB): %18%nRetrieved chunks physical data (MB): %19%nRetrieved stream logical data (MB): %20%nDataPort time (seconds): %21%nJob elapsed time (seconds): %22%nIngress throughput (MB/second): %23%nEgress throughput (MB/second): %24 |
0xB0001821 | Data Deduplication detected a non-clustered volume specified for the chunk index cache volume in a clustered deployment. The configuration is not recommended because it may result in job failures after failover.%n%nVolume: %3 (%2) |
Data Deduplication detected a non-clustered volume specified for the chunk index cache volume in a clustered deployment. The configuration is not recommended because it may result in job failures after failover.%n%nVolume: %3 (%2) |
0xB0002000 | Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" is low. The ratio to commit size is %3.%n%4 |
Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" is low. The ratio to commit size is %3.%n%4 |
0xB0002001 | Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" has recovered to a desirable level.%n%3 |
Data Deduplication detected that the working set of job type \"%1\" on volume \"%2\" has recovered to a desirable level.%n%3 |
0xB0002002 | Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" is high. The rate is %3 page faults per second.%n%4 |
Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" is high. The rate is %3 page faults per second.%n%4 |
0xB0002003 | Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" has dropped to a desirable level. The rate is %3 page faults per second.%n%4 |
Data Deduplication detected that the page fault rate of job type \"%1\" on volume \"%2\" has dropped to a desirable level. The rate is %3 page faults per second.%n%4 |
0xB0002004 | Data Deduplication failed to dedup file \"%1\" with file ID %2 due to non-fatal error %3%n%4.%n%nNote: You can retrieve the file name by running the command FSUTIL FILE QUERYFILENAMEBYID on the file in question. |
Data Deduplication failed to dedup file \"%1\" with file ID %2 due to non-fatal error %3%n%4.%n%nNote: You can retrieve the file name by running the command FSUTIL FILE QUERYFILENAMEBYID on the file in question. |
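A sketch of the FSUTIL lookup; the drive letter and file ID shown are placeholders, so substitute the file ID reported by the event:

    # Resolve a file name from its 64-bit file ID (placeholder values shown)
    fsutil file queryFileNameById D: 0x0001000000000042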
0xB000200C | Data Deduplication has aborted a group commit session.%n%nFile count: %1%nError: %2%n%3 |
Data Deduplication has aborted a group commit session.%n%nFile count: %1%nError: %2%n%3 |
0xB000201C | Failed to open the dedup settings registry key |
Failed to open the dedup settings registry key |
0xB000201D | Data Deduplication failed to dedup file \"%1\" with file ID %2 due to oplock break%n%3 |
Data Deduplication failed to dedup file \"%1\" with file ID %2 due to oplock break%n%3 |
0xB000201E | Data Deduplication failed to load hotspot table from file %1 due to error %2.%n%3 |
Data Deduplication failed to load hotspot table from file %1 due to error %2.%n%3 |
0xB000201F | Data Deduplication failed to initialize oplock.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4 |
Data Deduplication failed to initialize oplock.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4 |
0xB0002020 | While running a job on volume %1, Data Deduplication detected an invalid physical sector size %2. Using the default value %3.%n%4 |
While running a job on volume %1, Data Deduplication detected an invalid physical sector size %2. Using the default value %3.%n%4 |
0xB0002021 | Data Deduplication detected an unsupported chunk store container.%n%1 |
Data Deduplication detected an unsupported chunk store container.%n%1 |
0xB0002022 | Data Deduplication could not create a window to receive the task scheduler stop message due to error %1. Task(s) may not stop after the duration limit.%n%2 |
Data Deduplication could not create a window to receive the task scheduler stop message due to error %1. Task(s) may not stop after the duration limit.%n%2 |
0xB0002023 | Data Deduplication could not create a thread to poll for the task scheduler stop message due to error %1. Task(s) may not stop after the duration limit.%n%2 |
Data Deduplication could not create a thread to poll for the task scheduler stop message due to error %1. Task(s) may not stop after the duration limit.%n%2 |
0xB0002024 | An attempt was made to perform an initialization operation when initialization has already been completed.%n%1 |
An attempt was made to perform an initialization operation when initialization has already been completed.%n%1 |
0xB0002028 | Data Deduplication created emergency file %1.%n%3 |
Data Deduplication created emergency file %1.%n%3 |
0xB0002029 | Data Deduplication failed to create emergency file %1 with error %2.%n%3 |
Data Deduplication failed to create emergency file %1 with error %2.%n%3 |
0xB000202A | Data Deduplication deleted emergency file %1.%n%3 |
Data Deduplication deleted emergency file %1.%n%3 |
0xB000202B | Data Deduplication failed to delete emergency file %1 with error %2.%n%3 |
Data Deduplication failed to delete emergency file %1 with error %2.%n%3 |
0xB000202C | Data Deduplication detected a chunk store container with misaligned valid data length.%n%1 |
Data Deduplication detected a chunk store container with misaligned valid data length.%n%1 |
0xB000202D | Data Deduplication Garbage Collection encountered a delete log entry with an invalid stream map signature for stream map Id %1.%n%2 |
Data Deduplication Garbage Collection encountered a delete log entry with an invalid stream map signature for stream map Id %1.%n%2 |
0xB000202E | Data Deduplication failed to initialize oplock as the file appears to be missing.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4 |
Data Deduplication failed to initialize oplock as the file appears to be missing.%n%nFile ID: %1%nFile name: \"%2\"%nError: %3%n%4 |
0xB000202F | Data Deduplication skipped too many file-level errors. We will not log more than %1 file-level errors per job.%n%2 |
Data Deduplication skipped too many file-level errors. We will not log more than %1 file-level errors per job.%n%2 |
0xB0002030 | Data Deduplication diagnostic warning.%n%n%1%n%2 |
Data Deduplication diagnostic warning.%n%n%1%n%2 |
0xB0002031 | Data Deduplication diagnostic information.%n%n%1%n%2 |
Data Deduplication diagnostic information.%n%n%1%n%2 |
0xB0002032 | Data Deduplication found file %1 with a stream map id %2 in container file %3 marked for deletion.%n%4 |
Data Deduplication found file %1 with a stream map id %2 in container file %3 marked for deletion.%n%4 |
0xB0002033 | Failed to enqueue job of type \"%1\" on volume \"%2\".%n%3 |
Failed to enqueue job of type \"%1\" on volume \"%2\".%n%3 |
0xB0002034 | Error terminating job host process for job type \"%1\" on volume \"%2\" (process id: %3).%n%4 |
Error terminating job host process for job type \"%1\" on volume \"%2\" (process id: %3).%n%4 |
0xB0002035 | Data Deduplication encountered corrupted chunk %1 while updating container. Corrupted data that cannot be repaired will be copied as-is to the new container %2.%n%3 |
Data Deduplication encountered corrupted chunk %1 while updating container. Corrupted data that cannot be repaired will be copied as-is to the new container %2.%n%3 |
0xB0002036 | Data Deduplication job type \"%1\" on volume \"%2\" failed to exit gracefully.%n%3 |
Data Deduplication job type \"%1\" on volume \"%2\" failed to exit gracefully.%n%3 |
0xB0002037 | Data Deduplication job host for job type \"%1\" on volume \"%2\" exited unexpectedly.%n%3 |
Data Deduplication job host for job type \"%1\" on volume \"%2\" exited unexpectedly.%n%3 |
0xB0002038 | Data Deduplication has failed to load corruption metadata file on the store at %1 due to error %2. Please run deep scrubbing on the volume.%n%3 |
Data Deduplication has failed to load corruption metadata file on the store at %1 due to error %2. Please run deep scrubbing on the volume.%n%3 |
0xB0002039 | Data Deduplication full garbage collection phase 1 on volume \"%1\" encountered an error %2 while processing file %3. Phase 1 will be aborted because it is unsafe to continue garbage collection of file-related metadata after file errors.%n%4 |
Data Deduplication full garbage collection phase 1 on volume \"%1\" encountered an error %2 while processing file %3. Phase 1 will be aborted because it is unsafe to continue garbage collection of file-related metadata after file errors.%n%4 |
0xB000203A | Data Deduplication has failed to process corruption metadata file %1 due to error %2. Please run deep scrubbing on the volume.%n%3 |
Data Deduplication has failed to process corruption metadata file %1 due to error %2. Please run deep scrubbing on the volume.%n%3 |
0xB000203B | Data Deduplication has failed to load a corrupted metadata file %1 due to error %2. Deleting the file and continuing.%n%3 |
Data Deduplication has failed to load a corrupted metadata file %1 due to error %2. Deleting the file and continuing.%n%3 |
0xB000203C | Data Deduplication has failed to set NTFS allocation size for container file %1 due to error %2.%n%3 |
Data Deduplication has failed to set NTFS allocation size for container file %1 due to error %2.%n%3 |
0xB000203D | Data Deduplication configured to use BCrypt provider '%1' for hash algorithm %2.%n%3 |
Data Deduplication configured to use BCrypt provider '%1' for hash algorithm %2.%n%3 |
0xB000203E | Data Deduplication could not use BCrypt provider '%1' for hash algorithm %2 due to an error in operation %3. Reverting to the Microsoft primitive CNG provider.%n%4 |
Data Deduplication could not use BCrypt provider '%1' for hash algorithm %2 due to an error in operation %3. Reverting to the Microsoft primitive CNG provider.%n%4 |
0xB000203F | Data Deduplication failed to include file \"%1\" in file metadata analysis calculations.%n%2 |
Data Deduplication failed to include file \"%1\" in file metadata analysis calculations.%n%2 |
0xB0002040 | Data Deduplication failed to include stream map %1 in file metadata analysis calculations.%n%2 |
Data Deduplication failed to include stream map %1 in file metadata analysis calculations.%n%2 |
0xB0002041 | Data Deduplication encountered an error for file \"%1\" while scanning files and folders.%n%2 |
Data Deduplication encountered an error for file \"%1\" while scanning files and folders.%n%2 |
0xB0002042 | Data Deduplication encountered an error while attempting to resume processing. Please consult the event log parameters for more details about the current file being processed.%n%1 |
Data Deduplication encountered an error while attempting to resume processing. Please consult the event log parameters for more details about the current file being processed.%n%1 |
0xB0002043 | Data Deduplication encountered an error %1 while scanning the USN journal on volume %2 to update hot range tracking.%n%3 |
Data Deduplication encountered an error %1 while scanning the USN journal on volume %2 to update hot range tracking.%n%3 |
0xB0002044 | Data Deduplication could not truncate the stream of an optimized file. No action is required. Error: %1%n%n%2 |
Data Deduplication could not truncate the stream of an optimized file. No action is required. Error: %1%n%n%2 |
0xB0002800 | %1 job memory requirements.%n%nVolume: %4 (%3)%nMinimum memory: %5 MB%nMaximum memory: %6 MB%nMinimum disk: %7 MB%nMaximum cores: %8 |
%1 job memory requirements.%n%nVolume: %4 (%3)%nMinimum memory: %5 MB%nMaximum memory: %6 MB%nMinimum disk: %7 MB%nMaximum cores: %8 |
0xB0002801 | %1 reconciliation has started.%n%nVolume: %4 (%3) |
%1 reconciliation has started.%n%nVolume: %4 (%3) |
0xB0002802 | %1 reconciliation has completed.%n%nGuidance: This event is expected when Reconciliation has completed; there is no recommended or required action. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory.%n%nVolume: %4 (%3)%nReconciled containers: %5%nUnreconciled containers: %6%nCatchup references: %7%nCatchup containers: %8%nReconciled references: %9%nReconciled containers: %10%nCross-reconciled references: %11%nCross-reconciled containers: %12%nError code: %13%nError message: %14 |
%1 reconciliation has completed.%n%nGuidance: This event is expected when Reconciliation has completed; there is no recommended or required action. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory.%n%nVolume: %4 (%3)%nReconciled containers: %5%nUnreconciled containers: %6%nCatchup references: %7%nCatchup containers: %8%nReconciled references: %9%nReconciled containers: %10%nCross-reconciled references: %11%nCross-reconciled containers: %12%nError code: %13%nError message: %14 |
0xB0002803 | %1 job on volume %4 (%3) was configured with insufficient memory.%n%nSystem memory percentage: %5%nAvailable memory: %8 MB%nMinimum required memory: %6 MB |
%1 job on volume %4 (%3) was configured with insufficient memory.%n%nSystem memory percentage: %5%nAvailable memory: %8 MB%nMinimum required memory: %6 MB |
0xB0002804 | Optimization memory details for %1 job on volume %3 (%2). |
Optimization memory details for %1 job on volume %3 (%2). |
0xB0002805 | An open file was skipped during optimization. No action is required.%n%nFileId: %2%nSkip Reason: %1 |
An open file was skipped during optimization. No action is required.%n%nFileId: %2%nSkip Reason: %1 |
0xB0002806 | An operation succeeded after one or more retries. Operation: %1; FileId: %3; Number of retries: %2 |
An operation succeeded after one or more retries. Operation: %1; FileId: %3; Number of retries: %2 |
0xB0002807 | Data Deduplication aborted the optimization pipeline.%nVolumePath: %1%nErrorCode: %2%nErrorMessage: %3Details: %4 |
Data Deduplication aborted the optimization pipeline.%nVolumePath: %1%nErrorCode: %2%nErrorMessage: %3Details: %4 |
0xB0002808 | Data Deduplication aborted a file.%nFileId: %1%nFilePath: %2%nFileSize: %3%nFlags: %4%nTotalRanges: %5%nSkippedRanges: %6%nAbortedRanges: %7%nCommittedRanges: %8%nErrorCode: %9%nErrorMessage: %10Details: %11 |
Data Deduplication aborted a file.%nFileId: %1%nFilePath: %2%nFileSize: %3%nFlags: %4%nTotalRanges: %5%nSkippedRanges: %6%nAbortedRanges: %7%nCommittedRanges: %8%nErrorCode: %9%nErrorMessage: %10Details: %11 |
0xB0002809 | Data Deduplication aborted a file range.%nFileId: %1%nFilePath: %2%nRangeOffset: %3%nRangeLength: %4%nErrorCode: %5%nErrorMessage: %6Details: %7 |
Data Deduplication aborted a file range.%nFileId: %1%nFilePath: %2%nRangeOffset: %3%nRangeLength: %4%nErrorCode: %5%nErrorMessage: %6Details: %7 |
0xB000280A | Data Deduplication aborted a session.%nMaxSize: %1%nCurrentSize: %2%nRemainingRanges: %3%nErrorCode: %4%nErrorMessage: %5Details: %6 |
Data Deduplication aborted a session.%nMaxSize: %1%nCurrentSize: %2%nRemainingRanges: %3%nErrorCode: %4%nErrorMessage: %5Details: %6 |
0xB000280B | USN journal created.%n%nVolume: %2 (%1)%nMaximum size %3 MB%nAllocation size %4 MB |
USN journal created.%n%nVolume: %2 (%1)%nMaximum size %3 MB%nAllocation size %4 MB |
0xB000280C | DataPort memory details for %1 job on volume %3 (%2). |
DataPort memory details for %1 job on volume %3 (%2). |
0xB000280D | Data Deduplication detected a file with an ID that is not supported. Files with identifiers unpackable into 64 bits will be skipped. FileId: %1 FileName: %2 |
Data Deduplication detected a file with an ID that is not supported. Files with identifiers unpackable into 64 bits will be skipped. FileId: %1 FileName: %2 |
0xB000280E | Reconciliation should be run to ensure optimal savings.%n%nGuidance: This event is expected when Reconciliation is turned off for the DataPort job. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory. When Reconciliation would require 50% or more of the memory on the system, it is recommended that you temporarily stop running DataPort jobs against this volume and run an Optimization job. If Reconciliation is not run through an Optimization job before it would require more than 100% of system memory, Reconciliation will no longer be able to run (unless more memory is added). This would result in permanently decreased space efficiency on this volume.%n%nVolume: %2 (%1)%nMemory percentage required: %3 |
Reconciliation should be run to ensure optimal savings.%n%nGuidance: This event is expected when Reconciliation is turned off for the DataPort job. Reconciliation is an internal process that allows Optimization and DataPort jobs to run when the entire Deduplication chunk index cannot be loaded into memory. When Reconciliation would require 50% or more of the memory on the system, it is recommended that you temporarily stop running DataPort jobs against this volume and run an Optimization job. If Reconciliation is not run through an Optimization job before it would require more than 100% of system memory, Reconciliation will no longer be able to run (unless more memory is added). This would result in permanently decreased space efficiency on this volume.%n%nVolume: %2 (%1)%nMemory percentage required: %3 |
0xB000280F | Data Deduplication optimization job will not run the reconciliation step due to inadequate memory.%n%nGuidance: Deduplication savings will be suboptimal until the optimization job is provided more memory, or more memory is added to the system.%n%nVolume: %2 (%1)%nMemory percentage required: %3 |
Data Deduplication optimization job will not run the reconciliation step due to inadequate memory.%n%nGuidance: Deduplication savings will be suboptimal until the optimization job is provided more memory, or more memory is added to the system.%n%nVolume: %2 (%1)%nMemory percentage required: %3 |
0xB0003200 | Data Deduplication service detected corruption in \"%5%6%7\". The corruption cannot be repaired. |
Data Deduplication service detected corruption in \"%5%6%7\". The corruption cannot be repaired. |
0xB0003201 | Data Deduplication service detected corruption (%7) in \"%6\". See the event details for more information. |
Data Deduplication service detected corruption (%7) in \"%6\". See the event details for more information. |
0xB0003202 | Data Deduplication service detected a corrupted item (%11 - %13, %8, %9, %10, %12) in Deduplication Chunk Store on volume %4. See the event details for more information. |
Data Deduplication service detected a corrupted item (%11 - %13, %8, %9, %10, %12) in Deduplication Chunk Store on volume %4. See the event details for more information. |
0xB0003203 | Data Deduplication service has finished scrubbing on volume %3. It did not find any corruption since the last scrubbing. |
Data Deduplication service has finished scrubbing on volume %3. It did not find any corruption since the last scrubbing. |
0xB0003204 | Data Deduplication service found %4 corruption(s) on volume %3. All corruptions are fixed. |
Data Deduplication service found %4 corruption(s) on volume %3. All corruptions are fixed. |
0xB0003205 | Data Deduplication service found %4 corruption(s) on volume %3. %5 corruption(s) are fixed. %6 user file(s) are corrupted. %7 user file(s) are fixed. For the corrupted file list, see the Microsoft/Windows/Deduplication/Scrubbing events. |
Data Deduplication service found %4 corruption(s) on volume %3. %5 corruption(s) are fixed. %6 user file(s) are corrupted. %7 user file(s) are fixed. For the corrupted file list, see the Microsoft/Windows/Deduplication/Scrubbing events. |
0xB0003206 | Data Deduplication service found too many corruptions on volume %3. Some corruptions are not reported. |
Data Deduplication service found too many corruptions on volume %3. Some corruptions are not reported. |
0xB0003211 | Data Deduplication service has finished scrubbing on volume %3. See the event details for more information. |
Data Deduplication service has finished scrubbing on volume %3. See the event details for more information. |
0xB0003212 | Data Deduplication service encountered an error while processing file \"%5%6%7\". The error was %8. |
Data Deduplication service encountered an error while processing file \"%5%6%7\". The error was %8. |
0xB0003213 | Data Deduplication service encountered too many errors while processing files on volume %3. The threshold was %4. Some user file corruptions may not be reported. |
Data Deduplication service encountered too many errors while processing files on volume %3. The threshold was %4. Some user file corruptions may not be reported. |
0xB0003214 | Data Deduplication service encountered an error while detecting corruptions in the chunk store on volume %3. The error was %4. The job is aborted. |
Data Deduplication service encountered an error while detecting corruptions in the chunk store on volume %3. The error was %4. The job is aborted. |
0xB0003216 | Data Deduplication service encountered an error while loading corruption logs on volume %3. The error was %4. The job continues. Some corruptions may not be detected. |
Data Deduplication service encountered an error while loading corruption logs on volume %3. The error was %4. The job continues. Some corruptions may not be detected. |
0xB0003217 | Data Deduplication service encountered an error while cleaning up corruption logs on volume %3. The error was %4. Some corruptions may be reported again next time. |
Data Deduplication service encountered an error while cleaning up corruption logs on volume %3. The error was %4. Some corruptions may be reported again next time. |
0xB0003218 | Data Deduplication service encountered an error while loading the hotspot mapping from the chunk store on volume %3. The error was %4. Some corruptions may not be repaired. |
Data Deduplication service encountered an error while loading the hotspot mapping from the chunk store on volume %3. The error was %4. Some corruptions may not be repaired. |
0xB0003219 | Data Deduplication service encountered an error while determining corrupted user files on volume %3. The error was %4. Some user file corruptions may not be reported. |
Data Deduplication service encountered an error while determining corrupted user files on volume %3. The error was %4. Some user file corruptions may not be reported. |
0xB000321A | Data Deduplication service found %4 corruption(s) on volume %3. %6 user file(s) are corrupted. %7 user file(s) are fixable. Please run a scrubbing job in read-write mode to attempt to fix the reported corruptions. |
Data Deduplication service found %4 corruption(s) on volume %3. %6 user file(s) are corrupted. %7 user file(s) are fixable. Please run a scrubbing job in read-write mode to attempt to fix the reported corruptions. |
0xB000321B | Data Deduplication service fixed corruption in \"%5%6%7\". |
Data Deduplication service fixed corruption in \"%5%6%7\". |
0xB000321C | Data Deduplication service detected fixable corruption in \"%5%6%7\". Please run a scrubbing job in read-write mode to fix this corruption. |
Data Deduplication service detected fixable corruption in \"%5%6%7\". Please run a scrubbing job in read-write mode to fix this corruption. |
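A scrubbing job repairs corruptions when it is not restricted to read-only mode; a minimal sketch, assuming Start-DedupJob's -ReadOnly switch is what restricts it and with D: as a placeholder volume:

    # Run scrubbing in read-write (repair) mode by omitting -ReadOnly
    Start-DedupJob -Volume "D:" -Type Scrubbing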
0xB000321E | Data Deduplication service encountered an error while repairing corruptions on volume %3. The error was %4. The repair was unsuccessful. |
Data Deduplication service encountered an error while repairing corruptions on volume %3. The error was %4. The repair was unsuccessful. |
0xB000321F | Data Deduplication service detected a corrupted item (%6, %7, %8, %9) in Deduplication Chunk Store on volume %4. See the event details for more information. |
Data Deduplication service detected a corrupted item (%6, %7, %8, %9) in Deduplication Chunk Store on volume %4. See the event details for more information. |
0xB0003220 | Container (%8,%9) with user data is missing from the chunk store. A missing container may result from an incomplete restore, an incomplete migration, or file-system corruption. The volume is disabled from further optimization. It is recommended to restore the volume before re-enabling it for further optimization. |
Container (%8,%9) with user data is missing from the chunk store. A missing container may result from an incomplete restore, an incomplete migration, or file-system corruption. The volume is disabled from further optimization. It is recommended to restore the volume before re-enabling it for further optimization. |
0xB0003221 | Data Deduplication service encountered an error while scanning dedup user files on volume %3. The error was %4. Some user file corruptions may not be reported. |
Data Deduplication service encountered an error while scanning dedup user files on volume %3. The error was %4. Some user file corruptions may not be reported. |
0xB0003224 | Data Deduplication service detected potential data loss (%9) in \"%6\" due to sharing reparse data with file \"%8\". See the event details for more information. |
Data Deduplication service detected potential data loss (%9) in \"%6\" due to sharing reparse data with file \"%8\". See the event details for more information. |
0xB0003225 | Container (%8,%9) with user data is corrupt in the chunk store. It is recommended to restore the volume before re-enabling it for further optimization. |
Container (%8,%9) with user data is corrupt in the chunk store. It is recommended to restore the volume before re-enabling it for further optimization. |
0xB0005000 | Open stream store stream (StartingChunkId %1, FileId %2) |
Open stream store stream (StartingChunkId %1, FileId %2) |
0xB0005001 | Open stream store stream completed %1 |
Open stream store stream completed %1 |
0xB0005002 | Prepare for paging IO (Stream %1, FileId %2) |
Prepare for paging IO (Stream %1, FileId %2) |
0xB0005003 | Prepare for paging IO completed %1 |
Prepare for paging IO completed %1 |
0xB0005005 | Read stream map completed %1 |
Read stream map completed %1 |
0xB0005006 | Read chunks (Stream %1, FileId %2, IoType %3, FirstRequestChunkId %4, NextRequest %5) |
Read chunks (Stream %1, FileId %2, IoType %3, FirstRequestChunkId %4, NextRequest %5) |
0xB0005007 | Read chunks completed %1 |
Read chunks completed %1 |
0xB0005008 | Compute checksum (ItemType %1, DataSize %2) |
Compute checksum (ItemType %1, DataSize %2) |
0xB0005009 | Compute checksum completed %1 |
Compute checksum completed %1 |
0xB000500A | Get container entry (ContainerId %1, Generation %2) |
Get container entry (ContainerId %1, Generation %2) |
0xB000500B | Get container entry completed %1 |
Get container entry completed %1 |
0xB000500C | Get maximum generation for container (ContainerId %1, Generation %2) |
Get maximum generation for container (ContainerId %1, Generation %2) |
0xB000500D | Get maximum generation for container completed %1 |
Get maximum generation for container completed %1 |
0xB000500E | Open chunk container (ContainerId %1, Generation %2, RootPath %4) |
Open chunk container (ContainerId %1, Generation %2, RootPath %4) |
0xB000500F | Open chunk container completed %1 |
Open chunk container completed %1 |
0xB0005010 | Initialize chunk container redirection table (ContainerId %1, Generation %2) |
Initialize chunk container redirection table (ContainerId %1, Generation %2) |
0xB0005011 | Initialize chunk container redirection table completed %1 |
Initialize chunk container redirection table completed %1 |
0xB0005012 | Validate chunk container redirection table (ContainerId %1, Generation %2) |
Validate chunk container redirection table (ContainerId %1, Generation %2) |
0xB0005013 | Validate chunk container redirection table completed %1 |
Validate chunk container redirection table completed %1 |
0xB0005014 | Get chunk container valid data length (ContainerId %1, Generation %2) |
Get chunk container valid data length (ContainerId %1, Generation %2) |
0xB0005015 | Get chunk container valid data length completed %1 |
Get chunk container valid data length completed %1 |
0xB0005016 | Get offset from chunk container redirection table (ContainerId %1, Generation %2) |
Get offset from chunk container redirection table (ContainerId %1, Generation %2) |
0xB0005017 | Get offset from chunk container redirection table completed %1 |
Get offset from chunk container redirection table completed %1 |
0xB0005018 | Read chunk container block (ContainerId %1, Generation %2, Buffer %3, Offset %4, Length %5, IoType %6, Synchronous %7) |
Read chunk container block (ContainerId %1, Generation %2, Buffer %3, Offset %4, Length %5, IoType %6, Synchronous %7) |
0xB0005019 | Read chunk container block completed %1 |
Read chunk container block completed %1 |
0xB000501A | Clear chunk container block (Buffer %1, Size %2, BufferType %3) |
Clear chunk container block (Buffer %1, Size %2, BufferType %3) |
0xB000501B | Clear chunk container block completed %1 |
Clear chunk container block completed %1 |
0xB000501C | Copy chunk (Buffer %1, Size %2, BufferType %3, BufferOffset %4, OutputCapacity %5) |
Copy chunk (Buffer %1, Size %2, BufferType %3, BufferOffset %4, OutputCapacity %5) |
0xB000501D | Copy chunk completed %1 |
Copy chunk completed %1 |
0xB000501E | Initialize file cache (UnderlyingFileObject %1, CacheFileSize %2) |
Initialize file cache (UnderlyingFileObject %1, CacheFileSize %2) |
0xB000501F | Initialize file cache completed %1 |
Initialize file cache completed %1 |
0xB0005020 | Map file cache data (CacheFileObject %1, Offset %2, Length %3) |
Map file cache data (CacheFileObject %1, Offset %2, Length %3) |
0xB0005021 | Map file cache data completed %1 |
Map file cache data completed %1 |
0xB0005022 | Unpin file cache data (Bcb %1) |
Unpin file cache data (Bcb %1) |
0xB0005023 | Unpin file cache data completed %1 |
Unpin file cache data completed %1 |
0xB0005024 | Copy file cache data (CacheFileObject %1, Offset %2, Length %3) |
Copy file cache data (CacheFileObject %1, Offset %2, Length %3) |
0xB0005025 | Copy file cache data completed %1 |
Copy file cache data completed %1 |
0xB0005026 | Read underlying file cache data (CacheFileObject %1, UnderlyingFileObject %2, Offset %3, Length %4) |
Read underlying file cache data (CacheFileObject %1, UnderlyingFileObject %2, Offset %3, Length %4) |
0xB0005027 | Read underlying file cache data completed %1 |
Read underlying file cache data completed %1 |
0xB0005028 | Get chunk container file size (ContainerId %1, Generation %2) |
Get chunk container file size (ContainerId %1, Generation %2) |
0xB0005029 | Get chunk container file size completed %1 |
Get chunk container file size completed %1 |
0xB000502A | Pin stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4) |
Pin stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4) |
0xB000502B | Pin stream map completed %1 |
Pin stream map completed %1 |
0xB000502C | Pin chunk container (ContainerId %1, Generation %2) |
Pin chunk container (ContainerId %1, Generation %2) |
0xB000502D | Pin chunk container completed %1 |
Pin chunk container completed %1 |
0xB000502E | Pin chunk (ContainerId %1, Generation %2) |
Pin chunk (ContainerId %1, Generation %2) |
0xB000502F | Pin chunk completed %1 |
Pin chunk completed %1 |
0xB0005030 | Allocate pool buffer (ReadLength %1, PagingIo %2) |
Allocate pool buffer (ReadLength %1, PagingIo %2) |
0xB0005031 | Allocate pool buffer completed %1 |
Allocate pool buffer completed %1 |
0xB0005032 | Unpin chunk container (ContainerId %1, Generation %2) |
Unpin chunk container (ContainerId %1, Generation %2) |
0xB0005033 | Unpin chunk container completed %1 |
Unpin chunk container completed %1 |
0xB0005034 | Unpin chunk (ContainerId %1, Generation %2) |
Unpin chunk (ContainerId %1, Generation %2) |
0xB0005035 | Unpin chunk completed %1 |
Unpin chunk completed %1 |
0xB0006028 | Dedup read processing (FileObject %1, Offset %2, Length %3, IoType %4) |
Dedup read processing (FileObject %1, Offset %2, Length %3, IoType %4) |
0xB0006029 | Dedup read processing completed %1 |
Dedup read processing completed %1 |
0xB000602A | Get first stream map entry (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4) |
Get first stream map entry (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4) |
0xB000602B | Get first stream map entry completed %1 |
Get first stream map entry completed %1 |
0xB000602C | Read chunk metadata (Stream %1, CurrentOffset %2, AdjustedFinalOffset %3, FirstChunkByteOffset %4, ChunkRequestsEndOffset %5, TlCache %6) |
Read chunk metadata (Stream %1, CurrentOffset %2, AdjustedFinalOffset %3, FirstChunkByteOffset %4, ChunkRequestsEndOffset %5, TlCache %6) |
0xB000602D | Read chunk metadata completed %1 |
Read chunk metadata completed %1 |
0xB000602E | Read chunk data (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4) |
Read chunk data (TlCache %1, Stream %2, RequestStartOffset %3, RequestEndOffset %4) |
0xB000602F | Read chunk data completed %1 |
Read chunk data completed %1 |
0xB0006030 | Reference TlCache data (TlCache %1, Stream %2) |
Reference TlCache data (TlCache %1, Stream %2) |
0xB0006031 | Reference TlCache data completed %1 |
Reference TlCache data completed %1 |
0xB0006032 | Read chunk data from stream store (Stream %1) |
Read chunk data from stream store (Stream %1) |
0xB0006033 | Read chunk data from stream store completed %1 |
Read chunk data from stream store completed %1 |
0xB0006035 | Assemble chunk data completed %1 |
Assemble chunk data completed %1 |
0xB0006037 | Decompress chunk data completed %1 |
Decompress chunk data completed %1 |
0xB0006038 | Copy chunk data into user buffer (BytesCopied %1) |
Copy chunk data into user buffer (BytesCopied %1) |
0xB0006039 | Copy chunk data into user buffer completed %1 |
Copy chunk data into user buffer completed %1 |
0xB000603B | Insert chunk data into tlcache completed %1 |
Insert chunk data into tlcache completed %1 |
0xB000603C | Read data from dedup reparse point file (FileObject %1, Offset %2, Length %3) |
Read data from dedup reparse point file (FileObject %1, Offset %2, Length %3) |
0xB000603E | Prepare stream map (StreamContext %1) |
Prepare stream map (StreamContext %1) |
0xB000603F | Prepare stream map completed %1 |
Prepare stream map completed %1 |
0xB0006040 | Patch clean ranges (FileObject %1, Offset %2, Length %3) |
Patch clean ranges (FileObject %1, Offset %2, Length %3) |
0xB0006041 | Patch clean ranges completed %1 |
Patch clean ranges completed %1 |
0xB0006042 | Writing data to dedup file (FileObject %1, Offset %2, Length %3, IoType %4) |
Writing data to dedup file (FileObject %1, Offset %2, Length %3, IoType %4) |
0xB0006043 | Writing data to dedup file completed %1 |
Writing data to dedup file completed %1 |
0xB0006044 | Queue write request on dedup file (FileObject %1, Offset %2, Length %3) |
Queue write request on dedup file (FileObject %1, Offset %2, Length %3) |
0xB0006045 | Queue write request on dedup file completed %1 |
Queue write request on dedup file completed %1 |
0xB0006046 | Do copy on write work on dedup file (FileObject %1, Offset %2, Length %3) |
Do copy on write work on dedup file (FileObject %1, Offset %2, Length %3) |
0xB0006047 | Do copy on write work on dedup file completed %1 |
Do copy on write work on dedup file completed %1 |
0xB0006048 | Do full recall on dedup file (FileObject %1, Offset %2, Length %3) |
Do full recall on dedup file (FileObject %1, Offset %2, Length %3) |
0xB0006049 | Do full recall on dedup file completed %1 |
Do full recall on dedup file completed %1 |
0xB000604A | Do partial recall on dedup file (FileObject %1, Offset %2, Length %3) |
Do partial recall on dedup file (FileObject %1, Offset %2, Length %3) |
0xB000604B | Do partial recall on dedup file completed %1 |
Do partial recall on dedup file completed %1 |
0xB000604C | Do dummy paging read on dedup file (FileObject %1, Offset %2, Length %3) |
Do dummy paging read on dedup file (FileObject %1, Offset %2, Length %3) |
0xB000604D | Do dummy paging read on dedup file completed %1 |
Do dummy paging read on dedup file completed %1 |
0xB000604E | Read clean data for recalling file (FileObject %1, Offset %2, Length %3) |
Read clean data for recalling file (FileObject %1, Offset %2, Length %3) |
0xB000604F | Read clean data for recalling file completed %1 |
Read clean data for recalling file completed %1 |
0xB0006050 | Write clean data to dedup file normally (FileObject %1, Offset %2, Length %3) |
Write clean data to dedup file normally (FileObject %1, Offset %2, Length %3) |
0xB0006051 | Write clean data to dedup file completed %1 |
Write clean data to dedup file completed %1 |
0xB0006052 | Write clean data to dedup file paged (FileObject %1, Offset %2, Length %3) |
Write clean data to dedup file paged (FileObject %1, Offset %2, Length %3) |
0xB0006053 | Write clean data to dedup file paged completed %1 |
Write clean data to dedup file paged completed %1 |
0xB0006054 | Recall dedup file using paging Io (FileObject %1, Offset %2, Length %3) |
Recall dedup file using paging Io (FileObject %1, Offset %2, Length %3) |
0xB0006055 | Recall dedup file using paging Io completed %1 |
Recall dedup file using paging Io completed %1 |
0xB0006056 | Flush dedup file after recall (FileObject %1) |
Flush dedup file after recall (FileObject %1) |
0xB0006057 | Flush dedup file after recall completed %1 |
Flush dedup file after recall completed %1 |
0xB0006058 | Update bitmap after recall on dedup file (FileObject %1, Offset %2, Length %3) |
Update bitmap after recall on dedup file (FileObject %1, Offset %2, Length %3) |
0xB0006059 | Update bitmap after recall on dedup file completed %1 |
Update bitmap after recall on dedup file completed %1 |
0xB000605A | Delete dedup reparse point (FileObject %1) |
Delete dedup reparse point (FileObject %1) |
0xB000605B | Delete dedup reparse point completed %1 |
Delete dedup reparse point completed %1 |
0xB000605C | Open dedup file (FilePath %1) |
Open dedup file (FilePath %1) |
0xB000605D | Open dedup file completed %1 |
Open dedup file completed %1 |
0xB000605F | Locking user buffer for read completed %1 |
Locking user buffer for read completed %1 |
0xB0006061 | Get system address for MDL completed %1 |
Get system address for MDL completed %1 |
0xB0006062 | Read clean dedup file (FileObject %1, Offset %2, Length %3) |
Read clean dedup file (FileObject %1, Offset %2, Length %3) |
0xB0006063 | Read clean dedup file completed %1 |
Read clean dedup file completed %1 |
0xB0006064 | Get range state (Offset %1, Length %2) |
Get range state (Offset %1, Length %2) |
0xB0006065 | Get range state completed %1 |
Get range state completed %1 |
0xB0006067 | Get chunk body completed %1 |
Get chunk body completed %1 |
0xB0006069 | Release chunk completed %1 |
Release chunk completed %1 |
0xB000606A | Release decompress chunk context (BufferSize %1) |
Release decompress chunk context (BufferSize %1) |
0xB000606B | Release decompress chunk context completed %1 |
Release decompress chunk context completed %1 |
0xB000606C | Prepare decompress chunk context (BufferSize %1) |
Prepare decompress chunk context (BufferSize %1) |
0xB000606D | Prepare decompress chunk context completed %1 |
Prepare decompress chunk context completed %1 |
0xB000606E | Copy data to compressed buffer (BufferSize %1) |
Copy data to compressed buffer (BufferSize %1) |
0xB000606F | Copy data to compressed buffer completed %1 |
Copy data to compressed buffer completed %1 |
0xB0006071 | Release data from TlCache completed %1 |
Release data from TlCache completed %1 |
0xB0006072 | Queue async read request (FileObject %1, Offset %2, Length %3) |
Queue async read request (FileObject %1, Offset %2, Length %3) |
0xB0006073 | Queue async read request completed %1 |
Queue async read request completed %1 |
0xB0015004 | Read stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4) |
Read stream map (Stream %1, FileId %2, StartIndex %3, EntryCount %4) |
0xB1004000 | Create chunk container (%1 - %2.%3.ccc) |
Create chunk container (%1 - %2.%3.ccc) |
0xB1004001 | Create chunk container completed %1 |
Create chunk container completed %1 |
0xB1004002 | Copy chunk container (%1 - %2.%3.ccc) |
Copy chunk container (%1 - %2.%3.ccc) |
0xB1004003 | Copy chunk container completed %1 |
Copy chunk container completed %1 |
0xB1004004 | Delete chunk container (%1 - %2.%3.ccc) |
Delete chunk container (%1 - %2.%3.ccc) |
0xB1004005 | Delete chunk container completed %1 |
Delete chunk container completed %1 |
0xB1004006 | Rename chunk container (%1 - %2.%3.ccc%4) |
Rename chunk container (%1 - %2.%3.ccc%4) |
0xB1004007 | Rename chunk container completed %1 |
Rename chunk container completed %1 |
0xB1004008 | Flush chunk container (%1 - %2.%3.ccc) |
Flush chunk container (%1 - %2.%3.ccc) |
0xB1004009 | Flush chunk container completed %1 |
Flush chunk container completed %1 |
0xB100400A | Rollback chunk container (%1 - %2.%3.ccc) |
Rollback chunk container (%1 - %2.%3.ccc) |
0xB100400B | Rollback chunk container completed %1 |
Rollback chunk container completed %1 |
0xB100400C | Mark chunk container (%1 - %2.%3.ccc) read-only |
Mark chunk container (%1 - %2.%3.ccc) read-only |
0xB100400D | Mark chunk container read-only completed %1 |
Mark chunk container read-only completed %1 |
0xB100400E | Write chunk container (%1 - %2.%3.ccc) redirection table at offset %4 (Entries: StartIndex %5, Count %6) |
Write chunk container (%1 - %2.%3.ccc) redirection table at offset %4 (Entries: StartIndex %5, Count %6) |
0xB100400F | Write chunk container redirection table completed %1 |
Write chunk container redirection table completed %1 |
0xB1004011 | Write chunk container header completed %1 |
Write chunk container header completed %1 |
0xB1004013 | Insert data chunk header completed %1 |
Insert data chunk header completed %1 |
0xB1004015 | Insert data chunk body completed %1 with ChunkId %2 |
Insert data chunk body completed %1 with ChunkId %2 |
0xB1004019 | Write delete log header completed %1 |
Write delete log header completed %1 |
0xB100401B | Append delete log entries completed %1 |
Append delete log entries completed %1 |
0xB100401D | Delete delete log completed %1 |
Delete delete log completed %1 |
0xB100401F | Rename delete log completed %1 |
Rename delete log completed %1 |
0xB1004021 | Write chunk container bitmap completed %1 |
Write chunk container bitmap completed %1 |
0xB1004023 | Delete chunk container bitmap completed %1 |
Delete chunk container bitmap completed %1 |
0xB1004024 | Write merge log (%5 - %6.%7.merge.log) header |
Write merge log (%5 - %6.%7.merge.log) header |
0xB1004025 | Write merge log header completed %1 |
Write merge log header completed %1 |
0xB1004027 | Insert hotspot chunk header completed %1 |
Insert hotspot chunk header completed %1 |
0xB1004029 | Insert hotspot chunk body completed %1 with ChunkId %2 |
Insert hotspot chunk body completed %1 with ChunkId %2 |
0xB100402B | Insert stream map chunk header completed %1 |
Insert stream map chunk header completed %1 |
0xB100402D | Insert stream map chunk body completed %1 with ChunkId %2 |
Insert stream map chunk body completed %1 with ChunkId %2 |
0xB100402F | Append merge log entries completed %1 |
Append merge log entries completed %1 |
0xB1004030 | Delete merge log (%1 - %2.%3.merge.log) |
Delete merge log (%1 - %2.%3.merge.log) |
0xB1004031 | Delete merge log completed %1 |
Delete merge log completed %1 |
0xB1004032 | Flush merge log (%1 - %2.%3.merge.log) |
Flush merge log (%1 - %2.%3.merge.log) |
0xB1004033 | Flush merge log completed %1 |
Flush merge log completed %1 |
0xB1004034 | Update file list entries (Remove: %1, Add: %2) |
Update file list entries (Remove: %1, Add: %2) |
0xB1004035 | Update file list entries completed %1 |
Update file list entries completed %1 |
0xB1004036 | Set dedup reparse point on %2 (FileId %1) (ReparsePoint: SizeBackedByChunkStore %3, StreamMapInfoSize %4, StreamMapInfo %5) |
Set dedup reparse point on %2 (FileId %1) (ReparsePoint: SizeBackedByChunkStore %3, StreamMapInfoSize %4, StreamMapInfo %5) |
0xB1004037 | Set dedup reparse point completed %1 (%2) |
Set dedup reparse point completed %1 (%2) |
0xB1004038 | Set dedup zero data on %2 (FileId %1) |
Set dedup zero data on %2 (FileId %1) |
0xB1004039 | Set dedup zero data completed %1 |
Set dedup zero data completed %1 |
0xB100403A | Flush reparse point files |
Flush reparse point files |
0xB100403B | Flush reparse point files completed %1 |
Flush reparse point files completed %1 |
0xB100403C | Set sparse on file id %1 |
Set sparse on file id %1 |
0xB100403D | Set sparse completed %1 |
Set sparse completed %1 |
0xB100403E | FSCTL_SET_ZERO_DATA on file id %1 at offset %2 and BeyondFinalZero %3 |
FSCTL_SET_ZERO_DATA on file id %1 at offset %2 and BeyondFinalZero %3 |
0xB100403F | FSCTL_SET_ZERO_DATA completed %1 |
FSCTL_SET_ZERO_DATA completed %1 |
0xB1004040 | Rename chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) |
Rename chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) |
0xB1004041 | Rename chunk container bitmap completed %1 |
Rename chunk container bitmap completed %1 |
0xB1004042 | Insert padding chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
Insert padding chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
0xB1004043 | Insert padding chunk header completed %1 |
Insert padding chunk header completed %1 |
0xB1004044 | Insert padding chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
Insert padding chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
0xB1004045 | Insert padding chunk body completed %1 with ChunkId %2 |
Insert padding chunk body completed %1 with ChunkId %2 |
0xB1004046 | Insert batch of chunks to chunk container (%1 - %2.%3.ccc) at offset %4 (BatchChunkCount %5, BatchDataSize %6) |
Insert batch of chunks to chunk container (%1 - %2.%3.ccc) at offset %4 (BatchChunkCount %5, BatchDataSize %6) |
0xB1004047 | Insert batch of chunks completed %1 |
Insert batch of chunks completed %1 |
0xB1004049 | Write chunk container directory completed %1 |
Write chunk container directory completed %1 |
0xB100404B | Delete chunk container directory completed %1 |
Delete chunk container directory completed %1 |
0xB100404C | Rename chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) |
Rename chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) |
0xB100404D | Rename chunk container directory completed %1 |
Rename chunk container directory completed %1 |
0xB1014010 | Write chunk container (%5 - %6.%7.ccc) header at offset %8 (Header: USN %9, VDL %10, #Chunk %11, NextLocalId %12, Flags %13, LastAppendTime %14, BackupRedirectionTableOffset %15, LastReconciliationLocalId %16) |
Write chunk container (%5 - %6.%7.ccc) header at offset %8 (Header: USN %9, VDL %10, #Chunk %11, NextLocalId %12, Flags %13, LastAppendTime %14, BackupRedirectionTableOffset %15, LastReconciliationLocalId %16) |
0xB1014012 | Insert data chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
Insert data chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
0xB1014014 | Insert data chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
Insert data chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
0xB1014018 | Write delete log (%5 - %6.%7.delete.log) header |
Write delete log (%5 - %6.%7.delete.log) header |
0xB101401A | Append delete log (%1 - %2.%3.delete.log) entries at offset %4 (Entries: StartIndex %5, Count %6) |
Append delete log (%1 - %2.%3.delete.log) entries at offset %4 (Entries: StartIndex %5, Count %6) |
0xB101401C | Delete delete log (%1 - %2.%3.delete.log) |
Delete delete log (%1 - %2.%3.delete.log) |
0xB101401E | Rename delete log (%1 - %2.%3.delete.log) |
Rename delete log (%1 - %2.%3.delete.log) |
0xB1014020 | Write chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) (Bitmap: BitLength %5, StartIndex %6, Count %7) |
Write chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) (Bitmap: BitLength %5, StartIndex %6, Count %7) |
0xB1014022 | Delete chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) |
Delete chunk container bitmap (%1 - %2) for chunk container (%1 - %3.%4) |
0xB1014026 | Insert hotspot chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
Insert hotspot chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
0xB1014028 | Insert hotspot chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
Insert hotspot chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
0xB101402A | Insert stream map chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
Insert stream map chunk header to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) |
0xB1014048 | Write chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) (Directory: EntryCount %5) |
Write chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) (Directory: EntryCount %5) |
0xB101404A | Delete chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) |
Delete chunk container directory (%1 - %2) for chunk container (%1 - %3.%4) |
0xB102402E | Append merge log (%1 - %2.%3.merge.log) entries at offset %4 (Entries: StartIndex %5, Count %6) |
Append merge log (%1 - %2.%3.merge.log) entries at offset %4 (Entries: StartIndex %5, Count %6) |
0xB103402C | Insert stream map chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) (Entries: StartIndex %8, Count %9) |
Insert stream map chunk body to chunk container (%1 - %2.%3.ccc) at offset %4 (Chunk: IsBatched %5 IsCorrupted %6, DataSize %7) (Entries: StartIndex %8, Count %9) |
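The trace messages above use Windows-style positional insertion strings (%1, %2, ...), which the event infrastructure replaces with event-specific values when the message is rendered. The following is a minimal sketch, assuming that convention, of how such a template could be expanded; the `format_message` helper and the sample insertion values are illustrative only and are not part of the deduplication components.

```python
import re

def format_message(template: str, *args: str) -> str:
    """Replace %1, %2, ... with the corresponding positional argument."""
    def substitute(match: re.Match) -> str:
        index = int(match.group(1)) - 1      # %1 maps to args[0]
        return args[index] if 0 <= index < len(args) else match.group(0)
    return re.sub(r"%(\d+)", substitute, template)

# Example: message 0xB1004046 rendered with hypothetical insertion values.
template = ("Insert batch of chunks to chunk container (%1 - %2.%3.ccc) "
            "at offset %4 (BatchChunkCount %5, BatchDataSize %6)")
print(format_message(template, "ChunkStore", "0000000A", "00000002",
                     "0x10000", "32", "1048576"))
```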
0xD0000001 | Chunk header |
Chunk header |
0xD0000002 | Chunk body |
Chunk body |
0xD0000003 | Container header |
Container header |
0xD0000004 | Container redirection table |
Container redirection table |
0xD0000005 | Hotspot table |
Hotspot table |
0xD0000006 | Delete log header |
Delete log header |
0xD0000007 | Delete log entry |
Delete log entry |
0xD0000008 | GC bitmap header |
GC bitmap header |
0xD0000009 | GC bitmap entry |
GC bitmap entry |
0xD000000A | Merge log header |
Merge log header |
0xD000000B | Merge log entry |
Merge log entry |
0xD000000C | Data |
Data |
0xD000000E | Hotspot |
Hotspot |
0xD000000F | Optimization |
Optimization |
0xD0000010 | Garbage Collection |
Garbage Collection |
0xD0000011 | Scrubbing |
Scrubbing |
0xD0000012 | Unoptimization |
Unoptimization |
0xD0000013 | Analysis |
Analysis |
0xD0000014 | Low |
Low |
0xD0000015 | Normal |
Normal |
0xD0000016 | High |
High |
0xD0000017 | Cache |
Cache |
0xD0000018 | Non-cache |
Non-cache |
0xD0000019 | Paging |
Paging |
0xD000001A | Memory map |
Memory map |
0xD000001B | Paging memory map |
Paging memory map |
0xD000001C | None |
None |
0xD000001D | Pool |
Pool |
0xD000001E | PoolAligned |
PoolAligned |
0xD000001F | MDL |
MDL |
0xD0000020 | Map |
Map |
0xD0000021 | Cached |
Cached |
0xD0000022 | NonCached |
NonCached |
0xD0000023 | Paged |
Paged |
0xD0000024 | container file |
container file |
0xD0000025 | file list file |
file list file |
0xD0000026 | file list header |
file list header |
0xD0000027 | file list entry |
file list entry |
0xD0000028 | primary file list file |
primary file list file |
0xD0000029 | backup file list file |
backup file list file |
0xD000002A | Scheduled |
Scheduled |
0xD000002B | Manual |
Manual |
0xD000002C | recall bitmap header |
recall bitmap header |
0xD000002D | recall bitmap body |
recall bitmap body |
0xD000002E | recall bitmap missing |
recall bitmap missing |
0xD000002F | Recall bitmap |
Recall bitmap |
0xD0000030 | Unknown |
Unknown |
0xD0000031 | The pipeline handle was closed |
The pipeline handle was closed |
0xD0000032 | The file was deleted |
The file was deleted |
0xD0000033 | The file was overwritten |
The file was overwritten |
0xD0000034 | The file was recalled |
The file was recalled |
0xD0000035 | A transaction was started on the file |
A transaction was started on the file |
0xD0000036 | The file was encrypted |
The file was encrypted |
0xD0000037 | The file was compressed |
The file was compressed |
0xD0000038 | Set Zero Data was called on the file |
Set Zero Data was called on the file |
0xD0000039 | Extended Attributes were set on the file |
Extended Attributes were set on the file |
0xD000003A | A section was created on the file |
A section was created on the file |
0xD000003B | The file was shrunk |
The file was shrunk |
0xD000003C | A long-running IO operation prevented optimization |
A long-running IO operation prevented optimization |
0xD000003D | An IO operation failed |
An IO operation failed |
0xD000003E | Notifying Optimization |
Notifying Optimization |
0xD000003F | Setting the Reparse Point |
Setting the Reparse Point |
0xD0000040 | Truncating the file |
Truncating the file |
0xD0000041 | DataPort |
DataPort |
0xD1000002 | LZNT1 |
LZNT1 |
0xD1000003 | Xpress |
Xpress |
0xD1000004 | Xpress Huff |
Xpress Huff |
0xD1000006 | Standard |
Standard |
0xD1000007 | Max |
Max |
0xD1000008 | Hybrid |
Hybrid |
0xF0000002 | Bad checksum |
Bad checksum |
0xF0000003 | Inconsistent metadata |
Inconsistent metadata |
0xF0000004 | Invalid header metadata |
Invalid header metadata |
0xF0000005 | Missing file |
Missing file |
0xF0000006 | Bad checksum (storage subsystem) |
Bad checksum (storage subsystem) |
0xF0000007 | Corruption (storage subsystem) |
Corruption (storage subsystem) |
0xF0000008 | Corruption (missing metadata) |
Corruption (missing metadata) |
0xF0000009 | Possible data loss (duplicate reparse data) |
Possible data loss (duplicate reparse data) |
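The corruption-reason values above (0xF0000002 through 0xF0000009) lend themselves to a simple lookup when post-processing scrubbing or error output. The sketch below is illustrative only: the dictionary mirrors the table above, and the `describe_corruption` helper is a hypothetical name, not an API of the deduplication feature.

```python
# Hypothetical helper: map corruption-reason values to their display strings.
CORRUPTION_REASONS = {
    0xF0000002: "Bad checksum",
    0xF0000003: "Inconsistent metadata",
    0xF0000004: "Invalid header metadata",
    0xF0000005: "Missing file",
    0xF0000006: "Bad checksum (storage subsystem)",
    0xF0000007: "Corruption (storage subsystem)",
    0xF0000008: "Corruption (missing metadata)",
    0xF0000009: "Possible data loss (duplicate reparse data)",
}

def describe_corruption(code: int) -> str:
    """Return the display string for a corruption-reason code, or 'Unknown'."""
    return CORRUPTION_REASONS.get(code, "Unknown")

print(describe_corruption(0xF0000005))  # -> Missing file
```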