diff --git a/patches/server/0053-Asynchronous-chunk-IO-and-loading.patch b/patches/server/0053-Asynchronous-chunk-IO-and-loading.patch
new file mode 100644
index 000000000..2fade60fa
--- /dev/null
+++ b/patches/server/0053-Asynchronous-chunk-IO-and-loading.patch
@@ -0,0 +1,3687 @@
+From 6e5123eaf84800e4f27e2c306e915903de098241 Mon Sep 17 00:00:00 2001
+From: Spottedleaf
+Date: Sat, 13 Jul 2019 09:23:10 -0700
+Subject: [PATCH] Asynchronous chunk IO and loading
+
+This patch re-adds a file IO thread and also shoves chunk NBT
+de-serialization onto worker threads. It also shoves chunk data
+serialization onto the same worker threads when the chunk
+is unloaded - this cannot be done for regular saves since that is unsafe.
+
+The file IO Thread
+
+Unlike 1.13 and below, the file IO thread is prioritized - IO tasks can
+be reordered, however they are "stuck" to a world & coordinate.
+
+Scheduling IO tasks works as follows, given a world & coordinate (the "location"):
+
+The IO thread has been designed to ensure that reads and writes appear to
+occur synchronously for a given location, however the implementation also
+has the unfortunate side-effect of making every write appear as if
+it occurs without failure.
+
+The IO thread has also been designed to accommodate Mojang's decision to
+store chunk data and POI data separately. It can independently schedule
+tasks for each.
+
+However, threads can wait for writes to complete and check whether:
+ - The write was overwritten by another scheduler
+ - The write failed (though this does not indicate whether it was overwritten by another scheduler)
+
+Scheduling reads:
+
+ - If a write task is in progress, no read task is scheduled and the in-progress write data is returned
+   This means that readers cannot modify the returned NBTTagCompound and must clone it if they
+   wish to write (see the sketch below)
+ - If a write task is not in progress but a read task is in progress, then the read task is simply chained
+   This means that, again, readers cannot modify the returned NBTTagCompound
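+
+A caller-side sketch of the read path (illustrative only - the call shape and the
+"example-flag" key are made up for this example; the important part is the defensive clone):
+
+    ConcreteFileIOThread.ChunkData data = ConcreteFileIOThread.Holder.INSTANCE
+        .loadChunkData(world, chunkX, chunkZ, PrioritizedTaskQueue.NORMAL_PRIORITY, false, true);
+    if (data != null && data.chunkData != null) {
+        // the returned compound may be shared with a pending write - clone before mutating
+        NBTTagCompound copy = data.chunkData.clone();
+        copy.setBoolean("example-flag", true);
+    }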
+
+Scheduling writes:
+
+ - If a read task is in progress, ignore the read task and schedule the write
+ We cannot complete the read task since we assume it wants old data - not current
+ - If a write task is pending, overwrite the write data
+   The file IO thread correctly handles the case where the data is overwritten while it
+   is being written (before completing a task it checks whether the data was overwritten
+   and retries if so).
+
+When the file IO thread executes a task for a location, it will
+execute the read task first (if it exists), then it will execute the
+write task. This ensures that, even when scheduling at different
+priorities, reads/writes for a location act synchronously (sketched below).
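+
+A condensed, self-contained sketch of these rules (the real logic lives in
+ConcreteFileIOThread.scheduleRead/scheduleWrite and ChunkDataTask.run further below;
+LocationIoState and the method names here are illustrative only, and the obvious
+imports - java.util.concurrent.CompletableFuture, java.util.function.Consumer,
+net.minecraft.server.NBTTagCompound - are assumed):
+
+    final class LocationIoState {
+        CompletableFuture<NBTTagCompound> pendingRead; // null when no read is queued
+        NBTTagCompound pendingWrite;                   // null when no write is queued
+
+        void scheduleRead(Consumer<NBTTagCompound> onComplete) {
+            if (this.pendingWrite != null) {
+                onComplete.accept(this.pendingWrite);      // a write is pending: hand back its data immediately
+            } else if (this.pendingRead != null) {
+                this.pendingRead.thenAccept(onComplete);   // chain onto the in-progress read
+            } else {
+                this.pendingRead = new CompletableFuture<>();
+                this.pendingRead.thenAccept(onComplete);   // queue a new read task for the IO thread
+            }
+        }
+
+        void scheduleWrite(NBTTagCompound data) {
+            this.pendingWrite = data; // later writes simply replace the pending data
+        }
+    }
+
+On the IO thread itself, the read task for a location is always executed before its write task.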
+
+The downside of the file IO thread is that write failure can only be
+indicated to the scheduling thread if:
+
+- No other thread decides to schedule another write for the location
+concurrently
+- The scheduling thread blocks waiting for the write to complete (however the
+current implementation can be modified to indicate success
+asynchronously)
+
+The file IO thread can easily be modified to provide indications
+of write failure and write overwriting if needed.
+
+The upside of the file IO thread is that if a write fails, the
+chunk data is not lost until server restart. This leaves more room
+to recover from spurious failures.
+
+Finally, the IO thread will log to the console when reads
+or writes fail - with relevant detail.
+
+Asynchronous chunk data serialization for unloading chunks
+
+When chunks unload, they make a call to PlayerChunkMap#saveChunk(IChunkAccess).
+Even if I make the IO asynchronous for this call, the data serialization
+still hits pretty hard. Given that the chunk system now unloads chunks
+more aggressively (queued for unload immediately at ticket level 45 or
+higher), combined with our changes that make the unload queue
+significantly more aggressive, chunk unloads can hit pretty hard -
+especially with players running around with elytras and fireworks.
+
+For serializing chunk data off main, there are some tasks which cannot be
+done asynchronously. Lighting data must be saved beforehand as well as
+potentially some tick lists. These are completed before scheduling the
+asynchronous save.
+
+However, serializing chunk data off the main thread is still risky.
+Even though this patch schedules the save to occur after ALL references
+to the chunk are removed from the world, plugins can still technically
+access entities inside the chunk. To handle this, if the serialization task
+fails for any reason, it will be re-scheduled to serialize on the
+main thread - in the hope that the failure was caused by a plugin
+and not by an error in the save code itself. As in vanilla, if that
+serialization also fails, the chunk data is lost (see the sketch below).
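+
+Roughly, the unload save path looks like the following (a sketch only -
+saveLightAndTickLists, serializeChunk, saveChunkOnMain and the worker executor are
+hypothetical stand-ins for the ChunkSaveTask / PlayerChunkMap changes in this patch):
+
+    void saveUnloadingChunk(WorldServer world, IChunkAccess chunk, java.util.concurrent.Executor worker) {
+        saveLightAndTickLists(chunk); // must run on the main thread, before scheduling
+        worker.execute(() -> {
+            try {
+                NBTTagCompound data = serializeChunk(world, chunk); // off-main serialization
+                ConcreteFileIOThread.Holder.INSTANCE.scheduleSave(world, chunk.getPos().x, chunk.getPos().z,
+                    null, data, PrioritizedTaskQueue.NORMAL_PRIORITY);
+            } catch (Throwable thr) {
+                // a plugin may still be poking at the chunk's entities - retry on the main thread
+                MinecraftServer.getServer().scheduleOnMain(() -> saveChunkOnMain(world, chunk));
+            }
+        });
+    }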
+
+Asynchronous chunk IO/loading
+
+Mojang's current implementation for loading chunk data off disk is
+to return a CompletableFuture that will be completed by scheduling a
+task to be executed on the world's chunk queue (which is only drained
+on the main thread). This task performs the disk read and then applies
+data conversions & deserialization synchronously. All three of these
+operations are expensive, yet all of them can be completed
+asynchronously instead.
+
+The solution this patch uses is as follows:
+
+0. If an asynchronous chunk save is in progress (see above), wait
+for that task to complete and use the serialized NBTTagCompound it
+created. If the save task fails to complete, continue with step 1;
+otherwise skip step 1. (Note: POI data is still loaded from disk
+in this case, no matter what.)
+1. Schedule an IO task to read chunk & poi data off disk.
+2. The IO task will schedule a chunk load task.
+3. The chunk load task executes on the async chunk loader threads
+and will apply datafixers & de-serialize the chunk into a ProtoChunk
+or ProtoChunkExtension.
+4. The in-progress chunk is then passed on to the world's chunk queue
+to complete the CompletableFuture and execute any of the synchronous
+tasks required by the chunk load task (i.e. lighting and some poi
+tasks); a rough sketch of the whole flow follows.
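+
+A rough sketch of the flow (illustrative only - chunkLoadExecutor and deserializeChunk
+are hypothetical stand-ins for the global load threads and the ChunkRegionLoader changes
+added by this patch; step 0, re-using an in-progress unload save, is omitted):
+
+    CompletableFuture<IChunkAccess> loadChunkAsync(WorldServer world, int chunkX, int chunkZ) {
+        CompletableFuture<IChunkAccess> result = new CompletableFuture<>();
+        // step 1: read chunk & poi NBT on the file IO thread
+        ConcreteFileIOThread.Holder.INSTANCE.loadChunkDataAsync(world, chunkX, chunkZ,
+            PrioritizedTaskQueue.NORMAL_PRIORITY, (ConcreteFileIOThread.ChunkData data) -> {
+                // steps 2-3: datafix & de-serialize off-main into a ProtoChunk
+                chunkLoadExecutor.execute(() -> {
+                    ProtoChunk proto = deserializeChunk(world, chunkX, chunkZ, data.chunkData);
+                    // step 4: finish on the main thread (lighting, poi tasks, future completion)
+                    MinecraftServer.getServer().scheduleOnMain(() -> result.complete(proto));
+                });
+            }, true, true, false); // readPoiData, readChunkData, intendingToBlock
+        return result;
+    }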
+---
+ .../co/aikar/timings/WorldTimingsHandler.java | 22 +
+ .../com/destroystokyo/paper/PaperConfig.java | 61 ++
+ .../ChunkPacketBlockControllerAntiXray.java | 46 +-
+ .../paper/io/ConcreteFileIOThread.java | 664 ++++++++++++++++++
+ .../com/destroystokyo/paper/io/IOUtil.java | 62 ++
+ .../paper/io/PrioritizedTaskQueue.java | 258 +++++++
+ .../paper/io/QueueExecutorThread.java | 244 +++++++
+ .../paper/io/chunk/ChunkLoadTask.java | 120 ++++
+ .../paper/io/chunk/ChunkSaveTask.java | 114 +++
+ .../paper/io/chunk/ChunkTask.java | 40 ++
+ .../paper/io/chunk/ChunkTaskManager.java | 303 ++++++++
+ .../minecraft/server/ChunkProviderServer.java | 135 ++++
+ .../minecraft/server/ChunkRegionLoader.java | 157 ++++-
+ .../net/minecraft/server/ChunkStatus.java | 1 +
+ .../minecraft/server/IAsyncTaskHandler.java | 2 +-
+ .../net/minecraft/server/IChunkLoader.java | 29 +-
+ .../java/net/minecraft/server/MCUtil.java | 5 +
+ .../net/minecraft/server/MinecraftServer.java | 1 +
+ .../net/minecraft/server/NibbleArray.java | 1 +
+ .../net/minecraft/server/PlayerChunkMap.java | 302 +++++++-
+ .../java/net/minecraft/server/RegionFile.java | 2 +-
+ .../net/minecraft/server/RegionFileCache.java | 6 +-
+ .../minecraft/server/RegionFileSection.java | 56 +-
+ .../java/net/minecraft/server/TicketType.java | 1 +
+ .../net/minecraft/server/VillagePlace.java | 66 +-
+ .../net/minecraft/server/WorldServer.java | 77 +-
+ .../org/bukkit/craftbukkit/CraftWorld.java | 36 +-
+ 27 files changed, 2722 insertions(+), 89 deletions(-)
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/ConcreteFileIOThread.java
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/IOUtil.java
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/chunk/ChunkLoadTask.java
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/chunk/ChunkSaveTask.java
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/chunk/ChunkTask.java
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/chunk/ChunkTaskManager.java
+
+diff --git a/src/main/java/co/aikar/timings/WorldTimingsHandler.java b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
+index 366de66657..2b8f064ba3 100644
+--- a/src/main/java/co/aikar/timings/WorldTimingsHandler.java
++++ b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
+@@ -52,6 +52,17 @@ public class WorldTimingsHandler {
+ public final Timing worldSaveLevel;
+ public final Timing chunkSaveData;
+
++ public final Timing poiUnload;
++ public final Timing chunkUnload;
++ public final Timing poiSaveDataSerialization;
++ public final Timing chunkSave;
++ public final Timing chunkSaveOverwriteCheck;
++ public final Timing chunkSaveDataSerialization;
++ public final Timing chunkSaveIOWait;
++ public final Timing chunkUnloadPrepareSave;
++ public final Timing chunkUnloadPOISerialization;
++ public final Timing chunkUnloadDataSave;
++
+ public WorldTimingsHandler(World server) {
+ String name = server.worldData.getName() +" - ";
+
+@@ -99,6 +110,17 @@ public class WorldTimingsHandler {
+ tracker2 = Timings.ofSafe(name + "tracker stage 2");
+ doTick = Timings.ofSafe(name + "doTick");
+ tickEntities = Timings.ofSafe(name + "tickEntities");
++
++ poiUnload = Timings.ofSafe(name + "Chunk unload - POI");
++ chunkUnload = Timings.ofSafe(name + "Chunk unload - Chunk");
++ poiSaveDataSerialization = Timings.ofSafe(name + "Chunk save - POI Data serialization");
++ chunkSave = Timings.ofSafe(name + "Chunk save - Chunk");
++ chunkSaveOverwriteCheck = Timings.ofSafe(name + "Chunk save - Chunk Overwrite Check");
++ chunkSaveDataSerialization = Timings.ofSafe(name + "Chunk save - Chunk Data serialization");
++ chunkSaveIOWait = Timings.ofSafe(name + "Chunk save - Chunk IO Wait");
++ chunkUnloadPrepareSave = Timings.ofSafe(name + "Chunk unload - Async Save Prepare");
++ chunkUnloadPOISerialization = Timings.ofSafe(name + "Chunk unload - POI Data Serialization");
++ chunkUnloadDataSave = Timings.ofSafe(name + "Chunk unload - Data Serialization");
+ }
+
+ public static Timing getTickList(WorldServer worldserver, String timingsType) {
+diff --git a/src/main/java/com/destroystokyo/paper/PaperConfig.java b/src/main/java/com/destroystokyo/paper/PaperConfig.java
+index 750ca9727a..58e2a07079 100644
+--- a/src/main/java/com/destroystokyo/paper/PaperConfig.java
++++ b/src/main/java/com/destroystokyo/paper/PaperConfig.java
+@@ -1,5 +1,6 @@
+ package com.destroystokyo.paper;
+
++import com.destroystokyo.paper.io.chunk.ChunkTaskManager;
+ import com.google.common.base.Strings;
+ import com.google.common.base.Throwables;
+
+@@ -390,4 +391,64 @@ public class PaperConfig {
+ maxBookPageSize = getInt("settings.book-size.page-max", maxBookPageSize);
+ maxBookTotalSizeMultiplier = getDouble("settings.book-size.total-multiplier", maxBookTotalSizeMultiplier);
+ }
++
++ public static boolean asyncChunks = false;
++ //public static boolean asyncChunkGeneration = true; // Leave out for now until we can control this
++ //public static boolean asyncChunkGenThreadPerWorld = true; // Leave out for now until we can control this
++ public static int asyncChunkLoadThreads = -1;
++ private static void asyncChunks() {
++ if (version < 15) {
++ boolean enabled = config.getBoolean("settings.async-chunks", true);
++ ConfigurationSection section = config.createSection("settings.async-chunks");
++ section.set("enable", enabled);
++ section.set("load-threads", -1);
++ section.set("generation", true);
++ section.set("thread-per-world-generation", true);
++ }
++
++ // TODO load threads now control async chunk save for unloading chunks, look into renaming this?
++
++ asyncChunks = getBoolean("settings.async-chunks.enable", true);
++ //asyncChunkGeneration = getBoolean("settings.async-chunks.generation", true); // Leave out for now until we can control this
++ //asyncChunkGenThreadPerWorld = getBoolean("settings.async-chunks.thread-per-world-generation", true); // Leave out for now until we can control this
++ asyncChunkLoadThreads = getInt("settings.async-chunks.load-threads", -1);
++ if (asyncChunkLoadThreads <= 0) {
++ asyncChunkLoadThreads = (int) Math.min(Integer.getInteger("paper.maxChunkThreads", 8), Math.max(1, Runtime.getRuntime().availableProcessors() - 1));
++ }
++
++ // Let Shared Host set some limits
++ String sharedHostEnvGen = System.getenv("PAPER_ASYNC_CHUNKS_SHARED_HOST_GEN");
++ String sharedHostEnvLoad = System.getenv("PAPER_ASYNC_CHUNKS_SHARED_HOST_LOAD");
++ /* Ignore temporarily - we cannot control the gen threads (for now)
++ if ("1".equals(sharedHostEnvGen)) {
++ log("Async Chunks - Generation: Your host has requested to use a single thread world generation");
++ asyncChunkGenThreadPerWorld = false;
++ } else if ("2".equals(sharedHostEnvGen)) {
++ log("Async Chunks - Generation: Your host has disabled async world generation - You will experience lag from world generation");
++ asyncChunkGeneration = false;
++ }
++ */
++
++ if (sharedHostEnvLoad != null) {
++ try {
++ asyncChunkLoadThreads = Math.max(1, Math.min(asyncChunkLoadThreads, Integer.parseInt(sharedHostEnvLoad)));
++ } catch (NumberFormatException ignored) {}
++ }
++
++ if (!asyncChunks) {
++ log("Async Chunks: Disabled - Chunks will be managed synchronously, and will cause tremendous lag.");
++ } else {
++ ChunkTaskManager.initGlobalLoadThreads(asyncChunkLoadThreads);
++ log("Async Chunks: Enabled - Chunks will be loaded much faster, without lag.");
++ /* Ignore temporarily - we cannot control the gen threads (for now)
++ if (!asyncChunkGeneration) {
++ log("Async Chunks - Generation: Disabled - Chunks will be generated synchronously, and will cause tremendous lag.");
++ } else if (asyncChunkGenThreadPerWorld) {
++ log("Async Chunks - Generation: Enabled - Chunks will be generated much faster, without lag.");
++ } else {
++ log("Async Chunks - Generation: Enabled (Single Thread) - Chunks will be generated much faster, without lag.");
++ }
++ */
++ }
++ }
+ }
+diff --git a/src/main/java/com/destroystokyo/paper/antixray/ChunkPacketBlockControllerAntiXray.java b/src/main/java/com/destroystokyo/paper/antixray/ChunkPacketBlockControllerAntiXray.java
+index 9d8bee5cac..dc59b6431b 100644
+--- a/src/main/java/com/destroystokyo/paper/antixray/ChunkPacketBlockControllerAntiXray.java
++++ b/src/main/java/com/destroystokyo/paper/antixray/ChunkPacketBlockControllerAntiXray.java
+@@ -9,6 +9,7 @@ import java.util.concurrent.Executors;
+ import java.util.concurrent.atomic.AtomicInteger;
+ import java.util.function.Supplier;
+
++import com.destroystokyo.paper.io.PrioritizedTaskQueue;
+ import net.minecraft.server.*;
+ import org.bukkit.Bukkit;
+ import org.bukkit.World.Environment;
+@@ -150,6 +151,12 @@ public class ChunkPacketBlockControllerAntiXray extends ChunkPacketBlockControll
+
+ private final AtomicInteger xrayRequests = new AtomicInteger();
+
++ // Paper start - async chunk api
++ private Integer nextTicketHold() {
++ return Integer.valueOf(this.xrayRequests.getAndIncrement());
++ }
++ // Paper end
++
+ private Integer addXrayTickets(final int x, final int z, final ChunkProviderServer chunkProvider) {
+ final Integer hold = Integer.valueOf(this.xrayRequests.getAndIncrement());
+
+@@ -181,6 +188,35 @@ public class ChunkPacketBlockControllerAntiXray extends ChunkPacketBlockControll
+ chunk.world.getChunkAt(locX, locZ + 1);
+ }
+
++ // Paper start - async chunk api
++ private void loadNeighbourAsync(ChunkProviderServer chunkProvider, WorldServer world, int chunkX, int chunkZ, int[] counter, java.util.function.Consumer<Chunk> onNeighourLoad, Runnable onAllNeighboursLoad) {
++ chunkProvider.getChunkAtAsynchronously(chunkX, chunkZ, true, (Chunk neighbour) -> {
++ onNeighourLoad.accept(neighbour);
++ if (++counter[0] == 4) {
++ onAllNeighboursLoad.run();
++ }
++ });
++ world.asyncChunkTaskManager.raisePriority(chunkX, chunkZ, PrioritizedTaskQueue.HIGHER_PRIORITY);
++ }
++
++ private void loadNeighboursAsync(Chunk chunk, java.util.function.Consumer<Chunk> onNeighourLoad, Runnable onAllNeighboursLoad) {
++ int[] loaded = new int[1];
++
++ int locX = chunk.getPos().x;
++ int locZ = chunk.getPos().z;
++ WorldServer world = ((WorldServer)chunk.world);
++
++ onNeighourLoad.accept(chunk);
++
++ ChunkProviderServer chunkProvider = world.getChunkProvider();
++
++ this.loadNeighbourAsync(chunkProvider, world, locX - 1, locZ, loaded, onNeighourLoad, onAllNeighboursLoad);
++ this.loadNeighbourAsync(chunkProvider, world, locX + 1, locZ, loaded, onNeighourLoad, onAllNeighboursLoad);
++ this.loadNeighbourAsync(chunkProvider, world, locX, locZ - 1, loaded, onNeighourLoad, onAllNeighboursLoad);
++ this.loadNeighbourAsync(chunkProvider, world, locX, locZ + 1, loaded, onNeighourLoad, onAllNeighboursLoad);
++ }
++ // Paper end
++
+ @Override
+ public boolean onChunkPacketCreate(Chunk chunk, int chunkSectionSelector, boolean force) {
+ int locX = chunk.getPos().x;
+@@ -256,11 +292,15 @@ public class ChunkPacketBlockControllerAntiXray extends ChunkPacketBlockControll
+
+ if (chunks[0] == null || chunks[1] == null || chunks[2] == null || chunks[3] == null) {
+ // we need to load
+- MinecraftServer.getServer().scheduleOnMain(() -> {
+- Integer ticketHold = this.addXrayTickets(locX, locZ, world.getChunkProvider());
+- this.loadNeighbours(chunk);
++ // Paper start - async chunk api
++ Integer ticketHold = this.nextTicketHold();
++ this.loadNeighboursAsync(chunk, (Chunk neighbour) -> { // when a neighbour is loaded
++ ((WorldServer)neighbour.world).getChunkProvider().addTicket(TicketType.ANTIXRAY, neighbour.getPos(), 0, ticketHold);
++ },
++ () -> { // once neighbours get loaded
+ this.modifyBlocks(packetPlayOutMapChunk, chunkPacketInfo, false, ticketHold);
+ });
++ // Paper end
+ return;
+ }
+
+diff --git a/src/main/java/com/destroystokyo/paper/io/ConcreteFileIOThread.java b/src/main/java/com/destroystokyo/paper/io/ConcreteFileIOThread.java
+new file mode 100644
+index 0000000000..19f4b89a98
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/ConcreteFileIOThread.java
+@@ -0,0 +1,664 @@
++package com.destroystokyo.paper.io;
++
++import net.minecraft.server.ChunkCoordIntPair;
++import net.minecraft.server.ExceptionWorldConflict;
++import net.minecraft.server.MinecraftServer;
++import net.minecraft.server.NBTTagCompound;
++import net.minecraft.server.RegionFile;
++import net.minecraft.server.WorldServer;
++import org.apache.logging.log4j.Logger;
++
++import java.io.IOException;
++import java.util.concurrent.CompletableFuture;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.atomic.AtomicLong;
++import java.util.function.Consumer;
++import java.util.function.Function;
++
++/**
++ * Prioritized singleton thread responsible for all chunk IO that occurs in a minecraft server.
++ *
++ * Singleton access: {@link Holder#INSTANCE}
++ *
++ * All functions provided are MT-Safe, however certain ordering constraints are required (but not enforced):
++ *  - Chunk saves may not occur for unloaded chunks.
++ *  - Tasks must be scheduled on the main thread.
++ *
++ * @see Holder#INSTANCE
++ * @see #scheduleSave(WorldServer, int, int, NBTTagCompound, NBTTagCompound, int)
++ * @see #loadChunkDataAsync(WorldServer, int, int, int, Consumer, boolean, boolean, boolean)
++ */
++public final class ConcreteFileIOThread extends QueueExecutorThread {
++
++ public static final Logger LOGGER = MinecraftServer.LOGGER;
++ public static final NBTTagCompound FAILURE_VALUE = new NBTTagCompound();
++
++ public static final class Holder {
++
++ public static final ConcreteFileIOThread INSTANCE = new ConcreteFileIOThread();
++
++ static {
++ INSTANCE.start();
++ }
++ }
++
++ private final AtomicLong writeCounter = new AtomicLong();
++
++ private ConcreteFileIOThread() {
++ super(new PrioritizedTaskQueue<>(), (int)(1.0e6)); // 1.0ms spinwait time
++ this.setName("Concrete RegionFile IO Thread");
++ this.setPriority(Thread.NORM_PRIORITY - 1); // we keep priority close to normal because threads can wait on us
++ this.setUncaughtExceptionHandler((final Thread unused, final Throwable thr) -> {
++ LOGGER.fatal("Uncaught exception thrown from IO thread, report this!", thr);
++ });
++ }
++
++ /* run() is implemented by superclass */
++
++ /*
++ *
++ * IO thread will perform reads before writes
++ *
++ * How reads/writes are scheduled:
++ *
++ * If read in progress while scheduling write, ignore read and schedule write
++ * If read in progress while scheduling read (no write in progress), chain the read task
++ *
++ *
++ * If write in progress while scheduling read, use the pending write data and ret immediately
++ * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
++ *
++ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
++ * it fails to properly propagate write failures. When writes fail the data is kept so future reads will actually
++ * read the failed write data. This should hopefully act as a way to prevent data loss for spurious fails for writing data.
++ *
++ */
++
++ /**
++ * Attempts to bump the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param priority Priority level to try to bump to
++ */
++ public void bumpPriority(final WorldServer world, final int chunkX, final int chunkZ, final int priority) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority: " + priority);
++ }
++
++ final Long key = Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ));
++
++ final ChunkDataTask poiTask = world.poiDataController.tasks.get(key);
++ final ChunkDataTask chunkTask = world.chunkDataController.tasks.get(key);
++
++ if (poiTask != null) {
++ poiTask.raisePriority(priority);
++ }
++ if (chunkTask != null) {
++ chunkTask.raisePriority(priority);
++ }
++ }
++
++ // Hack start
++ /**
++ * if {@code waitForRead} is true, then this task will wait on an available read task, else it will wait on an available
++ * write task
++ * if {@code poiTask} is true, then this task will wait on a poi task, else it will wait on chunk data task
++ * @deprecated API is garbage and will only work for main thread queueing of tasks (which is vanilla), plugins messing
++ * around asynchronously will give unexpected results
++ * @return whether the task succeeded, or {@code null} if there is no task
++ */
++ @Deprecated
++ public Boolean waitForIOToComplete(final WorldServer world, final int chunkX, final int chunkZ, final boolean waitForRead,
++ final boolean poiTask) {
++ final ChunkDataTask task;
++
++ final Long key = IOUtil.getCoordinateKey(chunkX, chunkZ);
++ if (poiTask) {
++ task = world.poiDataController.tasks.get(key);
++ } else {
++ task = world.chunkDataController.tasks.get(key);
++ }
++
++ if (task == null) {
++ return null;
++ }
++
++ if (waitForRead) {
++ ChunkDataController.InProgressRead read = task.inProgressRead;
++ if (read == null) {
++ return null;
++ }
++ return Boolean.valueOf(read.readFuture.join() != null);
++ }
++
++ // wait for write
++ ChunkDataController.InProgressWrite write = task.inProgressWrite;
++ if (write == null) {
++ return null;
++ }
++ return Boolean.valueOf(write.wrote.join() != null);
++ }
++ // Hack end
++
++ public NBTTagCompound getPendingWrite(final WorldServer world, final int chunkX, final int chunkZ, final boolean poiData) {
++ final ChunkDataController taskController = poiData ? world.poiDataController : world.chunkDataController;
++
++ final ChunkDataTask dataTask = taskController.tasks.get(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)));
++
++ if (dataTask == null) {
++ return null;
++ }
++
++ final ChunkDataController.InProgressWrite write = dataTask.inProgressWrite;
++
++ if (write == null) {
++ return null;
++ }
++
++ return write.data;
++ }
++
++ /**
++ * Sets the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param priority Priority level to set to
++ */
++ public void setPriority(final WorldServer world, final int chunkX, final int chunkZ, final int priority) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority: " + priority);
++ }
++
++ final Long key = Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ));
++
++ final ChunkDataTask poiTask = world.poiDataController.tasks.get(key);
++ final ChunkDataTask chunkTask = world.chunkDataController.tasks.get(key);
++
++ if (poiTask != null) {
++ poiTask.updatePriority(priority);
++ }
++ if (chunkTask != null) {
++ chunkTask.updatePriority(priority);
++ }
++ }
++
++ /**
++ * Schedules the chunk data to be written asynchronously.
++ *
++ * Impl notes:
++ *
++ *
++ * This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
++ * saves must be scheduled before a chunk is unloaded.
++ *
++ *
++ * Writes may be called concurrently, although only the "later" write will go through.
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param poiData Chunk point of interest data. If {@code null}, then no poi data is saved.
++ * @param chunkData Chunk data. If {@code null}, then no chunk data is saved.
++ * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
++ * @throws IllegalArgumentException If both {@code poiData} and {@code chunkData} are {@code null}.
++ * @throws IllegalStateException If the file io thread has shutdown.
++ */
++ public void scheduleSave(final WorldServer world, final int chunkX, final int chunkZ,
++ final NBTTagCompound poiData, final NBTTagCompound chunkData,
++ final int priority) throws IllegalArgumentException {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority: " + priority);
++ }
++
++ final long writeCounter = this.writeCounter.getAndIncrement();
++
++ if (poiData != null) {
++ this.scheduleWrite(world.poiDataController, world, chunkX, chunkZ, poiData, priority, writeCounter);
++ }
++ if (chunkData != null) {
++ this.scheduleWrite(world.chunkDataController, world, chunkX, chunkZ, chunkData, priority, writeCounter);
++ }
++ }
++
++ private void scheduleWrite(final ChunkDataController dataController, final WorldServer world,
++ final int chunkX, final int chunkZ, final NBTTagCompound data, final int priority, final long writeCounter) {
++ dataController.tasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkDataTask taskRunning) -> {
++ if (taskRunning == null) {
++ // no task is scheduled
++
++ // create task
++ final ChunkDataTask newTask = new ChunkDataTask(priority, world, chunkX, chunkZ, dataController);
++ newTask.inProgressWrite = new ChunkDataController.InProgressWrite();
++ newTask.inProgressWrite.writeCounter = writeCounter;
++ newTask.inProgressWrite.data = data;
++
++ ConcreteFileIOThread.this.queueTask(newTask); // schedule
++ return newTask;
++ }
++
++ taskRunning.raisePriority(priority);
++
++ if (taskRunning.inProgressWrite == null) {
++ taskRunning.inProgressWrite = new ChunkDataController.InProgressWrite();
++ }
++
++ boolean reschedule = taskRunning.inProgressWrite.writeCounter == -1L;
++
++ // synchronize for readers
++ //noinspection SynchronizationOnLocalVariableOrMethodParameter
++ synchronized (taskRunning) {
++ taskRunning.inProgressWrite.data = data;
++ taskRunning.inProgressWrite.writeCounter = writeCounter;
++ }
++
++ if (reschedule) {
++ // We need to reschedule this task since the previous one is not currently scheduled since it failed
++ taskRunning.reschedule(priority);
++ }
++
++ return taskRunning;
++ });
++ }
++
++ /**
++ * Same as {@link #loadChunkDataAsync(WorldServer, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
++ * a {@link CompletableFuture} which is potentially completed ASYNCHRONOUSLY ON THE FILE IO THREAD when the load task
++ * has completed.
++ *
++ * Note that if the chunk fails to load the returned future is completed with {@code null}.
++ *
++ */
++ public CompletableFuture<ChunkData> loadChunkDataAsyncFuture(final WorldServer world, final int chunkX, final int chunkZ,
++ final int priority, final boolean readPoiData, final boolean readChunkData,
++ final boolean intendingToBlock) {
++ final CompletableFuture<ChunkData> future = new CompletableFuture<>();
++ this.loadChunkDataAsync(world, chunkX, chunkZ, priority, future::complete, readPoiData, readChunkData, intendingToBlock);
++ return future;
++ }
++
++ /**
++ * Schedules a load to be executed asynchronously.
++ *
++ * Impl notes:
++ *
++ *
++ * If a chunk fails to load, the {@code onComplete} parameter is completed with {@code null}.
++ *
++ *
++ * It is possible for the {@code onComplete} parameter to be given {@link ChunkData} containing data
++ * this call did not request.
++ *
++ *
++ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
++ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
++ * data is undefined behaviour, and can cause deadlock.
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
++ * @param onComplete Consumer to execute once this task has completed
++ * @param readPoiData Whether to read point of interest data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
++ * @param readChunkData Whether to read chunk data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
++ * @return The {@link PrioritizedTaskQueue.PrioritizedTask} associated with this task. Note that this task does not support
++ * cancellation.
++ */
++ public void loadChunkDataAsync(final WorldServer world, final int chunkX, final int chunkZ,
++ final int priority, final Consumer<ChunkData> onComplete,
++ final boolean readPoiData, final boolean readChunkData,
++ final boolean intendingToBlock) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority: " + priority);
++ }
++
++ if (!(readPoiData | readChunkData)) {
++ throw new IllegalArgumentException("Must read chunk data or poi data");
++ }
++
++ final ChunkData complete = new ChunkData();
++ final boolean[] requireCompletion = new boolean[] { readPoiData, readChunkData };
++
++ if (readPoiData) {
++ this.scheduleRead(world.poiDataController, world, chunkX, chunkZ, (final NBTTagCompound poiData) -> {
++ complete.poiData = poiData;
++
++ final boolean finished;
++
++ // avoid a race condition where the file io thread completes and we complete synchronously
++ // Note: Synchronization can be elided if both of the accesses are volatile
++ synchronized (requireCompletion) {
++ requireCompletion[0] = false; // 0 -> poi data
++ finished = !requireCompletion[1]; // 1 -> chunk data
++ }
++
++ if (finished) {
++ onComplete.accept(complete);
++ }
++ }, priority, intendingToBlock);
++ }
++
++ if (readChunkData) {
++ this.scheduleRead(world.chunkDataController, world, chunkX, chunkZ, (final NBTTagCompound chunkData) -> {
++ complete.chunkData = chunkData;
++
++ final boolean finished;
++
++ // avoid a race condition where the file io thread completes and we complete synchronously
++ // Note: Synchronization can be elided if both of the accesses are volatile
++ synchronized (requireCompletion) {
++ requireCompletion[1] = false; // 1 -> chunk data
++ finished = !requireCompletion[0]; // 0 -> poi data
++ }
++
++ if (finished) {
++ onComplete.accept(complete);
++ }
++ }, priority, intendingToBlock);
++ }
++
++ }
++
++ // Note: the onComplete may be called asynchronously or synchronously here.
++ private void scheduleRead(final ChunkDataController dataController, final WorldServer world,
++ final int chunkX, final int chunkZ, final Consumer<NBTTagCompound> onComplete, final int priority,
++ final boolean intendingToBlock) {
++
++ Function<RegionFile, Boolean> tryLoadFunction = (final RegionFile file) -> {
++ if (file == null) {
++ return Boolean.TRUE;
++ }
++ return Boolean.valueOf(file.chunkExists(new ChunkCoordIntPair(chunkX, chunkZ)));
++ };
++
++ dataController.tasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkDataTask running) -> {
++ if (running == null) {
++ // not scheduled
++
++ final Boolean shouldSchedule = intendingToBlock ? dataController.computeForRegionFile(chunkX, chunkZ, tryLoadFunction) :
++ dataController.computeForRegionFileIfLoaded(chunkX, chunkZ, tryLoadFunction);
++
++ if (shouldSchedule == Boolean.FALSE) {
++ // not on disk
++ onComplete.accept(null);
++ return null;
++ }
++
++ // set up task
++ final ChunkDataTask newTask = new ChunkDataTask(priority, world, chunkX, chunkZ, dataController);
++ newTask.inProgressRead = new ChunkDataController.InProgressRead();
++ newTask.inProgressRead.readFuture.thenAccept(onComplete);
++
++ ConcreteFileIOThread.this.queueTask(newTask); // schedule task
++ return newTask;
++ }
++
++ running.raisePriority(priority);
++
++ if (running.inProgressWrite == null) {
++ // chain to the read future
++ running.inProgressRead.readFuture.thenAccept(onComplete);
++ return running;
++ }
++
++ // at this stage we have to use the in progress write's data to avoid an order issue
++ // we don't synchronize since all writes to data occur in the compute() call
++ onComplete.accept(running.inProgressWrite.data);
++ return running;
++ });
++ }
++
++ /**
++ * Same as {@link #loadChunkDataAsync(WorldServer, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
++ * the {@link ChunkData} associated with the specified chunk when the task is complete.
++ * @return The chunk data, or {@code null} if the chunk failed to load.
++ */
++ public ChunkData loadChunkData(final WorldServer world, final int chunkX, final int chunkZ, final int priority,
++ final boolean readPoiData, final boolean readChunkData) {
++ return this.loadChunkDataAsyncFuture(world, chunkX, chunkZ, priority, readPoiData, readChunkData, true).join();
++ }
++
++ /**
++ * Schedules the given task at the specified priority to be executed on the IO thread.
++ *
++ * Internal api. Do not use.
++ *
++ */
++ public void runTask(final int priority, final Runnable runnable) {
++ this.queueTask(new GeneralTask(priority, runnable));
++ }
++
++ static final class GeneralTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
++
++ private final Runnable run;
++
++ public GeneralTask(final int priority, final Runnable run) {
++ super(priority);
++ this.run = IOUtil.notNull(run, "Task may not be null");
++ }
++
++ @Override
++ public void run() {
++ try {
++ this.run.run();
++ } catch (final Throwable throwable) {
++ if (throwable instanceof ThreadDeath) {
++ throw (ThreadDeath)throwable;
++ }
++ LOGGER.fatal("Failed to execute general task on IO thread " + IOUtil.genericToString(this.run), throwable);
++ }
++ }
++ }
++
++ public static final class ChunkData {
++
++ public NBTTagCompound poiData;
++ public NBTTagCompound chunkData;
++
++ public ChunkData() {}
++
++ public ChunkData(final NBTTagCompound poiData, final NBTTagCompound chunkData) {
++ this.poiData = poiData;
++ this.chunkData = chunkData;
++ }
++ }
++
++ public static abstract class ChunkDataController {
++
++ // ConcurrentHashMap synchronizes per chain, so reduce the chance of task's hashes colliding.
++ public final ConcurrentHashMap<Long, ChunkDataTask> tasks = new ConcurrentHashMap<>(64, 0.5f);
++
++ public abstract void writeData(final int x, final int z, final NBTTagCompound compound) throws IOException;
++ public abstract NBTTagCompound readData(final int x, final int z) throws IOException;
++
++ public abstract <T> T computeForRegionFile(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
++ public abstract <T> T computeForRegionFileIfLoaded(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
++
++ public static final class InProgressWrite {
++ public long writeCounter;
++ public NBTTagCompound data;
++
++ // Hack start
++ @Deprecated
++ public CompletableFuture<NBTTagCompound> wrote = new CompletableFuture<>();
++ // Hack end
++ }
++
++ public static final class InProgressRead {
++ public final CompletableFuture<NBTTagCompound> readFuture = new CompletableFuture<>();
++ }
++ }
++
++ public static final class ChunkDataTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
++
++ public ChunkDataController.InProgressWrite inProgressWrite;
++ public ChunkDataController.InProgressRead inProgressRead;
++
++ private final WorldServer world;
++ private final int x;
++ private final int z;
++ private final ChunkDataController taskController;
++
++ public ChunkDataTask(final int priority, final WorldServer world, final int x, final int z, final ChunkDataController taskController) {
++ super(priority);
++ this.world = world;
++ this.x = x;
++ this.z = z;
++ this.taskController = taskController;
++ }
++
++ @Override
++ public String toString() {
++ return "Task for world: '" + this.world.getWorld().getName() + "' at " + this.x + "," + this.z +
++ " poi: " + (this.taskController == this.world.poiDataController) + ", hash: " + this.hashCode();
++ }
++
++ /*
++ *
++ * IO thread will perform reads before writes
++ *
++ * How reads/writes are scheduled:
++ *
++ * If read in progress while scheduling write, ignore read and schedule write
++ * If read in progress while scheduling read (no write in progress), chain the read task
++ *
++ *
++ * If write in progress while scheduling read, use the pending write data and ret immediately
++ * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
++ *
++ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
++ * it fails to properly propagate write failures
++ *
++ */
++
++ void reschedule(final int priority) {
++ // priority is checked before this stage // TODO what
++ this.queue.lazySet(null);
++ this.inProgressWrite.wrote = new CompletableFuture<>(); // Hack
++ this.priority.lazySet(priority);
++ ConcreteFileIOThread.Holder.INSTANCE.queueTask(this);
++ }
++
++ @Override
++ public void run() {
++ ChunkDataController.InProgressRead read = this.inProgressRead;
++ if (read != null) {
++ NBTTagCompound compound = ConcreteFileIOThread.FAILURE_VALUE;
++ try {
++ compound = this.taskController.readData(this.x, this.z);
++ } catch (final Throwable thr) {
++ if (thr instanceof ThreadDeath) {
++ throw (ThreadDeath)thr;
++ }
++ LOGGER.fatal("Failed to read chunk data for task: " + this.toString(), thr);
++ // fall through to complete with the failure value
++ }
++ read.readFuture.complete(compound);
++ }
++
++ final Long chunkKey = Long.valueOf(IOUtil.getCoordinateKey(this.x, this.z));
++
++ ChunkDataController.InProgressWrite write = this.inProgressWrite;
++
++ if (write == null) {
++ // IntelliJ warns this is invalid, however it does not consider that writes to the task map & the inProgress field can occur concurrently.
++ ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
++ if (valueInMap == null) {
++ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
++ }
++ if (valueInMap != ChunkDataTask.this) {
++ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
++ }
++ return valueInMap.inProgressWrite == null ? null : valueInMap;
++ });
++
++ if (inMap == null) {
++ return; // set the task value to null, indicating we're done
++ }
++
++ // not null, which means there was a concurrent write
++ write = this.inProgressWrite;
++ }
++
++ // check if another process is writing
++ try {
++ this.world.checkSession();
++ } catch (final ExceptionWorldConflict ex) {
++ LOGGER.fatal("Couldn't save chunk; already in use by another instance of Minecraft?", ex);
++ // we don't need to set the write counter to -1 as we know at this stage there's no point in re-scheduling
++ // writes since they'll fail anyways.
++ write.wrote.complete(ConcreteFileIOThread.FAILURE_VALUE); // Hack - However we need to fail the write
++ return;
++ }
++
++ for (;;) {
++ final long writeCounter;
++ final NBTTagCompound data;
++
++ //noinspection SynchronizationOnLocalVariableOrMethodParameter
++ synchronized (write) {
++ writeCounter = write.writeCounter;
++ data = write.data;
++ }
++
++ boolean failedWrite = false;
++
++ try {
++ this.taskController.writeData(this.x, this.z, data);
++ } catch (final Throwable thr) {
++ if (thr instanceof ThreadDeath) {
++ throw (ThreadDeath)thr;
++ }
++ LOGGER.fatal("Failed to write chunk data for task: " + this.toString(), thr);
++ failedWrite = true;
++ }
++
++ boolean finalFailWrite = failedWrite;
++ boolean[] returnFailWrite = new boolean[] { false };
++
++ ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
++ if (valueInMap == null) {
++ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
++ }
++ if (valueInMap != ChunkDataTask.this) {
++ ChunkDataTask.this.inProgressWrite.wrote.complete(ConcreteFileIOThread.FAILURE_VALUE); // Hack
++ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
++ }
++ if (finalFailWrite) {
++ if (valueInMap.inProgressWrite.writeCounter == writeCounter) {
++ valueInMap.inProgressWrite.writeCounter = -1L;
++ returnFailWrite[0] = true;
++ }
++ // Hack start
++ valueInMap.inProgressWrite.wrote.complete(ConcreteFileIOThread.FAILURE_VALUE);
++ return valueInMap;
++ }
++ if (valueInMap.inProgressWrite.writeCounter == writeCounter) {
++ valueInMap.inProgressWrite.wrote.complete(data);
++ return null;
++ }
++ return valueInMap;
++ // Hack end
++ });
++
++ if (inMap == null || returnFailWrite[0]) {
++ // write counter matched, so we wrote the most up-to-date pending data, we're done here
++ // or we failed to write and successfully set the write counter to -1
++ return; // we're done here
++ }
++
++ // fetch & write new data
++ continue;
++ }
++ }
++ }
++}
+diff --git a/src/main/java/com/destroystokyo/paper/io/IOUtil.java b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
+new file mode 100644
+index 0000000000..5af0ac3d9e
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
+@@ -0,0 +1,62 @@
++package com.destroystokyo.paper.io;
++
++import org.bukkit.Bukkit;
++
++public final class IOUtil {
++
++ /* Copied from concrete or concurrentutil */
++
++ public static long getCoordinateKey(final int x, final int z) {
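++ // pack the chunk coordinates into a single long key: x in the low 32 bits, z in the high 32 bits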
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
++
++ public static int getCoordinateX(final long key) {
++ return (int)key;
++ }
++
++ public static int getCoordinateZ(final long key) {
++ return (int)(key >>> 32);
++ }
++
++ public static int getRegionCoordinate(final int chunkCoordinate) {
++ return chunkCoordinate >> 5;
++ }
++
++ public static int getChunkInRegion(final int chunkCoordinate) {
++ return chunkCoordinate & 31;
++ }
++
++ public static String genericToString(final Object object) {
++ return object == null ? "null" : object.getClass().getName() + ":" + object.toString();
++ }
++
++ public static <T> T notNull(final T obj) {
++ if (obj == null) {
++ throw new NullPointerException();
++ }
++ return obj;
++ }
++
++ public static <T> T notNull(final T obj, final String msgIfNull) {
++ if (obj == null) {
++ throw new NullPointerException(msgIfNull);
++ }
++ return obj;
++ }
++
++ public static void arrayBounds(final int off, final int len, final int arrayLength, final String msgPrefix) {
++ if (off < 0 || len < 0 || (arrayLength - off) < len) {
++ throw new ArrayIndexOutOfBoundsException(msgPrefix + ": off: " + off + ", len: " + len + ", array length: " + arrayLength);
++ }
++ }
++
++ public static int getPriorityForCurrentThread() {
++ return Bukkit.isPrimaryThread() ? PrioritizedTaskQueue.HIGHEST_PRIORITY : PrioritizedTaskQueue.NORMAL_PRIORITY;
++ }
++
++ @SuppressWarnings("unchecked")
++ public static <T extends Throwable> void rethrow(final Throwable throwable) throws T {
++ throw (T)throwable;
++ }
++
++}
+diff --git a/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
+new file mode 100644
+index 0000000000..c3ca3c4a1c
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
+@@ -0,0 +1,258 @@
++package com.destroystokyo.paper.io;
++
++import java.util.concurrent.ConcurrentLinkedQueue;
++import java.util.concurrent.atomic.AtomicBoolean;
++import java.util.concurrent.atomic.AtomicInteger;
++import java.util.concurrent.atomic.AtomicReference;
++
++public class PrioritizedTaskQueue<T extends PrioritizedTaskQueue.PrioritizedTask> {
++
++ // lower numbers are a higher priority (except < 0)
++ // higher priorities are always executed before lower priorities
++
++ /**
++ * Priority value indicating the task has completed or is being completed.
++ */
++ public static final int COMPLETING_PRIORITY = -1;
++
++ /**
++ * Highest priority, should only be used for main thread tasks or tasks that are blocking the main thread.
++ */
++ public static final int HIGHEST_PRIORITY = 0;
++
++ /**
++ * Should be only used in an IO task so that chunk loads do not wait on other IO tasks.
++ * This only exists because IO tasks are scheduled before chunk load tasks to decrease IO waiting times.
++ */
++ public static final int HIGHER_PRIORITY = 1;
++
++ /**
++ * Should be used for scheduling chunk loads/generation that would increase response times to users.
++ */
++ public static final int HIGH_PRIORITY = 2;
++
++ /**
++ * Default priority.
++ */
++ public static final int NORMAL_PRIORITY = 3;
++
++ /**
++ * Use for tasks that are not at all critical and can potentially be delayed.
++ */
++ public static final int LOW_PRIORITY = 4;
++
++ /**
++ * Use for tasks that should "eventually" execute.
++ */
++ public static final int LOWEST_PRIORITY = 5;
++
++ private static final int TOTAL_PRIORITIES = 6;
++
++ final ConcurrentLinkedQueue<T>[] queues = (ConcurrentLinkedQueue<T>[])new ConcurrentLinkedQueue[TOTAL_PRIORITIES];
++
++ private final AtomicBoolean shutdown = new AtomicBoolean();
++
++ {
++ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
++ this.queues[i] = new ConcurrentLinkedQueue<>();
++ }
++ }
++
++ /**
++ * Returns whether the specified priority is valid
++ */
++ public static boolean validPriority(final int priority) {
++ return priority >= 0 && priority < TOTAL_PRIORITIES;
++ }
++
++ /**
++ * Queues a task.
++ * @throws IllegalStateException If the task has already been queued. Use {@link PrioritizedTask#raisePriority(int)} to
++ * raise a task's priority.
++ * This can also be thrown if the queue has shutdown.
++ */
++ public void add(final T task) throws IllegalStateException {
++ task.onQueue(this);
++ this.queues[task.getPriority()].add(task);
++ if (this.shutdown.get()) {
++ // note: we're not actually sure at this point if our task will go through
++ throw new IllegalStateException("Queue has shutdown, refusing to execute task " + IOUtil.genericToString(task));
++ }
++ }
++
++ /**
++ * Polls the highest priority task currently available. {@code null} if none.
++ */
++ public T poll() {
++ T task;
++ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
++ final ConcurrentLinkedQueue<T> queue = this.queues[i];
++
++ while ((task = queue.poll()) != null) {
++ final int prevPriority = task.tryComplete(i);
++ if (prevPriority != COMPLETING_PRIORITY && prevPriority <= i) {
++ // if the prev priority was greater-than or equal to our current priority
++ return task;
++ }
++ }
++ }
++
++ return null;
++ }
++
++ /**
++ * Prevent further additions to this queue. Attempts to add after this call has completed (potentially during) will
++ * result in {@link IllegalStateException} being thrown.
++ *
++ * This operation is atomic with respect to other shutdown calls
++ *
++ *
++ * After this call has completed, regardless of return value, this queue will be shutdown.
++ *
++ * @return {@code true} if the queue was shutdown, {@code false} if it has shut down already
++ */
++ public boolean shutdown() {
++ return !this.shutdown.getAndSet(true);
++ }
++
++ public abstract static class PrioritizedTask {
++
++ protected final AtomicReference<PrioritizedTaskQueue> queue = new AtomicReference<>();
++
++ protected final AtomicInteger priority;
++
++ protected PrioritizedTask() {
++ this(PrioritizedTaskQueue.NORMAL_PRIORITY);
++ }
++
++ protected PrioritizedTask(final int priority) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.priority = new AtomicInteger(priority);
++ }
++
++ /**
++ * Returns the current priority. Note that {@link PrioritizedTaskQueue#COMPLETING_PRIORITY} will be returned
++ * if this task is completing or has completed.
++ */
++ public final int getPriority() {
++ return this.priority.get();
++ }
++
++ /**
++ * Returns whether this task is scheduled to execute, or has been already executed.
++ */
++ public boolean isScheduled() {
++ return this.queue.get() != null;
++ }
++
++ final int tryComplete(final int minPriority) {
++ for (int curr = this.getPriorityVolatile();;) {
++ if (curr == COMPLETING_PRIORITY) {
++ return COMPLETING_PRIORITY;
++ }
++ if (curr > minPriority) {
++ // curr is lower priority
++ return curr;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, COMPLETING_PRIORITY))) {
++ return curr;
++ }
++ continue;
++ }
++ }
++
++ /**
++ * Forces this task to be completed.
++ * @return {@code true} if the task was cancelled, {@code false} if the task has already completed or is being completed.
++ */
++ public boolean cancel() {
++ return this.exchangePriorityVolatile(PrioritizedTaskQueue.COMPLETING_PRIORITY) != PrioritizedTaskQueue.COMPLETING_PRIORITY;
++ }
++
++ /**
++ * Attempts to raise the priority to the priority level specified.
++ * @param priority Priority specified
++ * @return {@code true} if successful, {@code false} otherwise.
++ */
++ public boolean raisePriority(final int priority) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority");
++ }
++
++ for (int curr = this.getPriorityVolatile();;) {
++ if (curr == COMPLETING_PRIORITY) {
++ return false;
++ }
++ if (priority >= curr) {
++ return true;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
++ PrioritizedTaskQueue queue = this.queue.get();
++ if (queue != null) {
++ //noinspection unchecked
++ queue.queues[priority].add(this); // silently fail on shutdown
++ }
++ return true;
++ }
++ continue;
++ }
++ }
++
++ /**
++ * Attempts to set this task's priority level to the level specified.
++ * @param priority Specified priority level.
++ * @return {@code true} if successful, {@code false} if this task is completing or has completed.
++ */
++ public boolean updatePriority(final int priority) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority");
++ }
++
++ for (int curr = this.getPriorityVolatile();;) {
++ if (curr == COMPLETING_PRIORITY) {
++ return false;
++ }
++ if (curr == priority) {
++ return true;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
++ PrioritizedTaskQueue queue = this.queue.get();
++ if (queue != null) {
++ //noinspection unchecked
++ queue.queues[priority].add(this); // silently fail on shutdown
++ }
++ return true;
++ }
++ continue;
++ }
++ }
++
++ void onQueue(final PrioritizedTaskQueue queue) {
++ if (this.queue.getAndSet(queue) != null) {
++ throw new IllegalStateException("Already queued!");
++ }
++ }
++
++ /* priority */
++
++ protected final int getPriorityVolatile() {
++ return this.priority.get();
++ }
++
++ protected final int compareAndExchangePriorityVolatile(final int expect, final int update) {
++ if (this.priority.compareAndSet(expect, update)) {
++ return expect;
++ }
++ return this.priority.get();
++ }
++
++ protected final int exchangePriorityVolatile(final int value) {
++ return this.priority.getAndSet(value);
++ }
++ }
++}
+diff --git a/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
+new file mode 100644
+index 0000000000..f127ef236e
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
+@@ -0,0 +1,244 @@
++package com.destroystokyo.paper.io;
++
++import net.minecraft.server.MinecraftServer;
++import org.apache.logging.log4j.Logger;
++
++import java.util.concurrent.ConcurrentLinkedQueue;
++import java.util.concurrent.atomic.AtomicBoolean;
++import java.util.concurrent.locks.LockSupport;
++
++public class QueueExecutorThread<T extends PrioritizedTaskQueue.PrioritizedTask & Runnable> extends Thread {
++
++ private static final Logger LOGGER = MinecraftServer.LOGGER;
++
++ protected final PrioritizedTaskQueue<T> queue;
++ protected final long spinWaitTime;
++
++ protected volatile boolean closed;
++
++ protected final AtomicBoolean parked = new AtomicBoolean();
++
++ protected volatile ConcurrentLinkedQueue<Thread> flushQueue = new ConcurrentLinkedQueue<>();
++
++ // this is required to synchronize LockSupport#park()
++ // LockSupport explicitly states that it will only follow ordering with respect to volatile access
++ // see flush() for more details
++ protected volatile long flushCounter;
++
++ public QueueExecutorThread(final PrioritizedTaskQueue<T> queue) {
++ this(queue, (int)(1.e6)); // 1.0ms
++ }
++
++ public QueueExecutorThread(final PrioritizedTaskQueue<T> queue, final long spinWaitTime) { // in ns
++ this.queue = queue;
++ this.spinWaitTime = spinWaitTime;
++ }
++
++ @Override
++ public void run() {
++ final long spinWaitTime = this.spinWaitTime;
++ main_loop:
++ for (;;) {
++ this.pollTasks(true);
++
++ // spinwait
++
++ final long start = System.nanoTime();
++
++ for (;;) {
++ // If we are interrupted for any reason, park() will always return immediately. Clear so that we don't needlessly use cpu in such an event.
++ Thread.interrupted();
++ LockSupport.parkNanos("Spinwaiting on tasks", 1000L); // 1us
++
++ if (this.pollTasks(true)) {
++ // restart loop, found tasks
++ continue main_loop;
++ }
++
++ if (this.handleClose()) {
++ return; // we're done
++ }
++
++ if ((System.nanoTime() - start) >= spinWaitTime) {
++ break;
++ }
++ }
++
++ if (this.handleClose()) {
++ return;
++ }
++
++ this.parked.set(true);
++ // We need to poll tasks here to avoid a race condition where a thread queues a task before we set parked to true
++ // (i.e. it will not notify us)
++
++ // it also resolves a race condition where we've overridden a concurrent thread's flush call which set parked to false
++ // the important ordering: (volatile guarantees we cannot re-order the below events)
++ // us: parked -> true, poll tasks -> flushCounter + 1 -> drain flush queue
++ // them: read flush counter -> add to flush queue -> write parked to false -> park loop
++
++ // if we overwrite their set parked to false call then they're in the park loop or about to be, and we're about to
++ // drain the flush queue
++ if (this.pollTasks(true)) {
++ this.parked.set(false);
++ continue;
++ }
++ if (this.handleClose()) {
++ return;
++ }
++
++ // we don't need to check parked before sleeping, but we do need to check parked in a do-while loop
++ // LockSupport.park() can fail for any reason
++ do {
++ Thread.interrupted();
++ LockSupport.park("Waiting on tasks");
++ } while (this.parked.get());
++ }
++ }
++
++ protected boolean handleClose() {
++ if (this.closed) {
++ this.pollTasks(true); // this ensures we've emptied the queue
++ this.handleFlushThreads(true);
++ return true;
++ }
++ return false;
++ }
++
++ protected boolean pollTasks(boolean flushTasks) {
++ Runnable task;
++ boolean ret = false;
++
++ while ((task = this.queue.poll()) != null) {
++ ret = true;
++ try {
++ task.run();
++ } catch (final Throwable throwable) {
++ if (throwable instanceof ThreadDeath) {
++ throw (ThreadDeath)throwable;
++ }
++ LOGGER.fatal("Exception thrown from prioritized runnable task in thread '" + this.getName() + "': " + IOUtil.genericToString(task), throwable);
++ }
++ }
++
++ if (flushTasks) {
++ this.handleFlushThreads(false);
++ }
++
++ return ret;
++ }
++
++ protected void handleFlushThreads(final boolean shutdown) {
++ final ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue; // Note: this can be a plain read
++ if (shutdown) {
++ this.flushQueue = null; // Note: this can be a release write
++ }
++
++ Thread current;
++
++ while ((current = flushQueue.poll()) != null) {
++ this.pollTasks(false);
++ // increment flush counter so threads will wake up after being unparked()
++ //noinspection NonAtomicOperationOnVolatileField
++ ++this.flushCounter; // may be plain read plain write if we order before poll() (also would need to re-order pollTasks)
++ LockSupport.unpark(current);
++ }
++ }
++
++ /**
++ * Notifies this thread that a task has been added to its queue
++ * @return {@code true} if this thread was waiting for tasks, {@code false} if it is executing tasks
++ */
++ public boolean notifyTasks() {
++ if (this.parked.get() && this.parked.getAndSet(false)) {
++ LockSupport.unpark(this);
++ return true;
++ }
++ return false;
++ }
++
++ protected void queueTask(final T task) {
++ this.queue.add(task);
++ this.notifyTasks();
++ }
++
++
++ /**
++ * Waits until this thread's queue is empty.
++ *
++ * @throws IllegalStateException If the current thread is {@code this} thread.
++ */
++ public void flush() {
++ final Thread currentThread = Thread.currentThread();
++
++ if (currentThread == this) {
++ // avoid deadlock
++ throw new IllegalStateException("Cannot flush the queue executor thread while on the queue executor thread");
++ }
++
++ // order is important
++
++ long flushCounter = this.flushCounter;
++
++ ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue;
++
++ // it's important to read the flush queue after the flush counter to ensure that if we proceed from here
++ // we have a flush counter that would differ from the final flush counter if the queue executor shuts down
++ // the double read of the flush queue below is not enough to account for this on its own
++ if (flushQueue == null) {
++ return; // queue executor has received shutdown and emptied queue
++ }
++
++ flushQueue.add(currentThread);
++
++ // re-check null flush queue, we need to guarantee the executor is not shutting down before parking
++
++ if (this.flushQueue == null) {
++ // cannot guarantee state of flush queue now, the executor is done though
++ return;
++ }
++
++ // force a response from the IO thread, we're not sure of its state currently
++ this.parked.set(false);
++ LockSupport.unpark(this);
++
++ // Note: see the run() function for handling of a race condition where the queue executor overwrites our parked write
++
++ boolean interrupted = false; // preserve interrupted status
++
++ while (this.flushCounter == flushCounter) {
++ interrupted |= Thread.interrupted();
++ LockSupport.park();
++ }
++
++ if (interrupted) {
++ Thread.currentThread().interrupt();
++ }
++ }
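++
++ // Usage sketch (illustrative only, mirrors how ChunkTaskManager publishes work and later drains it;
++ // "queue" and "worker" stand in for a shared PrioritizedTaskQueue and its QueueExecutorThread):
++ //   queue.add(task);        // publish the task first
++ //   worker.notifyTasks();   // then wake the worker if it is parked
++ //   ...
++ //   worker.flush();         // from any non-worker thread: blocks until the queue has drained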
++
++ /**
++ * Closes this queue executor's queue and optionally waits for it to empty.
++ *
++ * If wait is {@code true}, then the queue will be empty by the time this call completes.
++ *
++ *
++ * This function is MT-Safe.
++ *
++ * @param wait If this call is to wait until the queue is empty
++ * @param killQueue Whether to shutdown this thread's queue
++ * @return whether this thread shut down the queue
++ */
++ public boolean close(final boolean wait, final boolean killQueue) {
++ boolean ret = !killQueue ? false : this.queue.shutdown();
++ this.closed = true;
++
++ // force thread to respond to the shutdown
++ this.parked.set(false);
++ LockSupport.unpark(this);
++
++ if (wait) {
++ this.flush();
++ }
++ return ret;
++ }
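++
++ // Shutdown sketch (illustrative only): close(true, true) shuts the queue down, wakes this thread and
++ // then flush()es, so the queue is guaranteed to be empty when the call returns:
++ //   worker.close(true, true);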
++}
+diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkLoadTask.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkLoadTask.java
+new file mode 100644
+index 0000000000..24f231bf45
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkLoadTask.java
+@@ -0,0 +1,120 @@
++package com.destroystokyo.paper.io.chunk;
++
++import co.aikar.timings.Timing;
++import com.destroystokyo.paper.io.ConcreteFileIOThread;
++import com.destroystokyo.paper.io.IOUtil;
++import net.minecraft.server.ChunkCoordIntPair;
++import net.minecraft.server.ChunkRegionLoader;
++import net.minecraft.server.PlayerChunkMap;
++import net.minecraft.server.WorldServer;
++
++import java.util.ArrayDeque;
++import java.util.function.Consumer;
++
++public final class ChunkLoadTask extends ChunkTask {
++
++ final Consumer<ChunkRegionLoader.InProgressChunkHolder> onComplete;
++ public ConcreteFileIOThread.ChunkData chunkData;
++
++ private boolean hasCompleted;
++
++ public ChunkLoadTask(final WorldServer world, final int chunkX, final int chunkZ, final int priority,
++ final ChunkTaskManager taskManager,
++ final Consumer<ChunkRegionLoader.InProgressChunkHolder> onComplete) {
++ super(world, chunkX, chunkZ, priority, taskManager);
++ this.onComplete = onComplete;
++ }
++
++ private static final ArrayDeque<Runnable> EMPTY_QUEUE = new ArrayDeque<>();
++
++ private static ChunkRegionLoader.InProgressChunkHolder createEmptyHolder() {
++ return new ChunkRegionLoader.InProgressChunkHolder(null, EMPTY_QUEUE);
++ }
++
++ @Override
++ public void run() {
++ try {
++ this.executeTask();
++ } catch (final Throwable ex) {
++ ConcreteFileIOThread.LOGGER.error("Failed to execute chunk load task: " + this.toString(), ex);
++ if (!this.hasCompleted) {
++ this.complete(ChunkLoadTask.createEmptyHolder());
++ }
++ }
++ }
++
++ public void executeTask() {
++ // either executed synchronously or asynchronously
++ final ConcreteFileIOThread.ChunkData chunkData = this.chunkData;
++
++ if (chunkData.poiData == ConcreteFileIOThread.FAILURE_VALUE || chunkData.chunkData == ConcreteFileIOThread.FAILURE_VALUE) {
++ ConcreteFileIOThread.LOGGER.error("Could not load chunk for task: " + this.toString() + ", file IO thread has dumped the relevant exception above");
++ this.complete(ChunkLoadTask.createEmptyHolder());
++ return;
++ }
++
++ if (chunkData.chunkData == null) {
++ // not on disk
++ this.complete(ChunkLoadTask.createEmptyHolder());
++ return;
++ }
++
++ final ChunkCoordIntPair chunkPos = new ChunkCoordIntPair(this.chunkX, this.chunkZ);
++
++ final PlayerChunkMap chunkManager = this.world.getChunkProvider().playerChunkMap;
++
++ try (Timing ignored = this.world.timings.chunkIOStage1.startTimingIfSync()) {
++ final ChunkRegionLoader.InProgressChunkHolder chunkHolder;
++
++ // apply fixes
++
++ try {
++ if (chunkData.poiData != null) {
++ chunkData.poiData = chunkData.poiData.clone(); // clone data for safety, file IO thread does not clone
++ }
++ chunkData.chunkData = chunkManager.getChunkData(this.world.getWorldProvider().getDimensionManager(),
++ chunkManager.getWorldPersistentDataSupplier(), chunkData.chunkData.clone(), chunkPos, this.world); // clone data for safety, file IO thread does not clone
++ } catch (final Throwable ex) {
++ ConcreteFileIOThread.LOGGER.error("Could not apply datafixers for chunk task: " + this.toString(), ex);
++ this.complete(ChunkLoadTask.createEmptyHolder());
++ return; // do not continue with broken data or risk completing twice
++ }
++
++ try {
++ this.world.getChunkProvider().playerChunkMap.updateChunkStatusOnDisk(chunkPos, chunkData.chunkData);
++ } catch (final Throwable ex) {
++ ConcreteFileIOThread.LOGGER.warn("Failed to update chunk status cache for task: " + this.toString(), ex);
++ // non-fatal, continue
++ }
++
++ try {
++ chunkHolder = ChunkRegionLoader.loadChunk(this.world,
++ chunkManager.definedStructureManager, chunkManager.getVillagePlace(), chunkPos,
++ chunkData.chunkData, true);
++ } catch (final Throwable ex) {
++ ConcreteFileIOThread.LOGGER.error("Could not de-serialize chunk data for task: " + this.toString(), ex);
++ this.complete(ChunkLoadTask.createEmptyHolder());
++ return;
++ }
++
++ this.complete(chunkHolder);
++ }
++ }
++
++ private void complete(final ChunkRegionLoader.InProgressChunkHolder holder) {
++ this.hasCompleted = true;
++ holder.poiData = this.chunkData == null ? null : this.chunkData.poiData;
++
++ try {
++ this.onComplete.accept(holder);
++ } catch (final Throwable thr) {
++ ConcreteFileIOThread.LOGGER.error("Failed to complete chunk data for task: " + this.toString(), thr);
++ }
++
++ this.taskManager.chunkLoadTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(this.chunkX, this.chunkZ)), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
++ if (valueInMap != ChunkLoadTask.this) {
++ throw new IllegalStateException("Expected this task to be scheduled, but another was! Other:" + valueInMap + ", current: " + ChunkLoadTask.this);
++ }
++ return null;
++ });
++ }
++}
+diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkSaveTask.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkSaveTask.java
+new file mode 100644
+index 0000000000..c3a6b482c2
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkSaveTask.java
+@@ -0,0 +1,114 @@
++package com.destroystokyo.paper.io.chunk;
++
++import co.aikar.timings.Timing;
++import com.destroystokyo.paper.io.ConcreteFileIOThread;
++import com.destroystokyo.paper.io.IOUtil;
++import com.destroystokyo.paper.io.PrioritizedTaskQueue;
++import net.minecraft.server.ChunkRegionLoader;
++import net.minecraft.server.IAsyncTaskHandler;
++import net.minecraft.server.IChunkAccess;
++import net.minecraft.server.MCUtil;
++import net.minecraft.server.MinecraftServer;
++import net.minecraft.server.NBTTagCompound;
++import net.minecraft.server.WorldServer;
++
++import java.util.concurrent.CompletableFuture;
++import java.util.concurrent.atomic.AtomicInteger;
++
++public final class ChunkSaveTask extends ChunkTask {
++
++ public final ChunkRegionLoader.AsyncSaveData asyncSaveData;
++ public final IChunkAccess chunk;
++ public final CompletableFuture<NBTTagCompound> onComplete = new CompletableFuture<>();
++
++ private final AtomicInteger attemptedPriority;
++
++ public ChunkSaveTask(final WorldServer world, final int chunkX, final int chunkZ, final int priority,
++ final ChunkTaskManager taskManager, final ChunkRegionLoader.AsyncSaveData asyncSaveData,
++ final IChunkAccess chunk) {
++ super(world, chunkX, chunkZ, priority, taskManager);
++ this.chunk = chunk;
++ this.asyncSaveData = asyncSaveData;
++ this.attemptedPriority = new AtomicInteger(priority);
++ }
++
++ @Override
++ public void run() {
++ // can be executed asynchronously or synchronously
++ final NBTTagCompound compound;
++
++ try (Timing ignored = this.world.timings.chunkUnloadDataSave.startTimingIfSync()) {
++ compound = ChunkRegionLoader.saveChunk(this.world, this.chunk, this.asyncSaveData);
++ } catch (final Throwable ex) {
++ // has a plugin modified something it should not have and made us CME?
++ ConcreteFileIOThread.LOGGER.error("Failed to serialize unloading chunk data for task: " + this.toString() + ", falling back to a synchronous execution", ex);
++
++ // Note: We add to the server thread queue here since this is what the server will drain tasks from
++ // when waiting for chunks
++ ((IAsyncTaskHandler)this.world.getChunkProvider().serverThreadQueue).addTask(() -> {
++ try (Timing ignored = this.world.timings.chunkUnloadDataSave.startTiming()) {
++ NBTTagCompound data = ConcreteFileIOThread.FAILURE_VALUE;
++
++ try {
++ data = ChunkRegionLoader.saveChunk(this.world, this.chunk, this.asyncSaveData);
++ ConcreteFileIOThread.LOGGER.info("Successfully serialized chunk data for task: " + this.toString() + " synchronously");
++ } catch (final Throwable ex1) {
++ ConcreteFileIOThread.LOGGER.fatal("Failed to synchronously serialize unloading chunk data for task: " + this.toString() + "! Chunk data will be lost", ex1);
++ }
++
++ ChunkSaveTask.this.complete(data);
++ }
++ });
++
++ return; // the main thread will now complete the data
++ }
++
++ this.complete(compound);
++ }
++
++ @Override
++ public boolean raisePriority(final int priority) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalStateException("Invalid priority: " + priority);
++ }
++
++ // we know priority is valid here
++ for (int curr = this.attemptedPriority.get();;) {
++ if (curr <= priority) {
++ break; // curr is higher/same priority
++ }
++ if (this.attemptedPriority.compareAndSet(curr, priority)) {
++ break;
++ }
++ curr = this.attemptedPriority.get();
++ }
++
++ return super.raisePriority(priority);
++ }
++
++ @Override
++ public boolean updatePriority(final int priority) {
++ if (!PrioritizedTaskQueue.validPriority(priority)) {
++ throw new IllegalStateException("Invalid priority: " + priority);
++ }
++ this.attemptedPriority.set(priority);
++ return super.updatePriority(priority);
++ }
++
++ private void complete(final NBTTagCompound compound) {
++ try {
++ this.onComplete.complete(compound);
++ } catch (final Throwable thr) {
++ ConcreteFileIOThread.LOGGER.error("Failed to complete chunk data for task: " + this.toString(), thr);
++ }
++ if (compound != ConcreteFileIOThread.FAILURE_VALUE) {
++ ConcreteFileIOThread.Holder.INSTANCE.scheduleSave(this.world, this.chunkX, this.chunkZ, null, compound, this.attemptedPriority.get());
++ }
++ this.taskManager.chunkSaveTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(this.chunkX, this.chunkZ)), (final Long keyInMap, final ChunkSaveTask valueInMap) -> {
++ if (valueInMap != ChunkSaveTask.this) {
++ throw new IllegalStateException("Expected this task to be scheduled, but another was! Other:" + valueInMap + ", this: " + ChunkSaveTask.this);
++ }
++ return null;
++ });
++ }
++}
+diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTask.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTask.java
+new file mode 100644
+index 0000000000..400fae5d09
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTask.java
+@@ -0,0 +1,40 @@
++package com.destroystokyo.paper.io.chunk;
++
++import com.destroystokyo.paper.io.ConcreteFileIOThread;
++import com.destroystokyo.paper.io.PrioritizedTaskQueue;
++import net.minecraft.server.WorldServer;
++
++abstract class ChunkTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
++
++ public final WorldServer world;
++ public final int chunkX;
++ public final int chunkZ;
++ public final ChunkTaskManager taskManager;
++
++ public ChunkTask(final WorldServer world, final int chunkX, final int chunkZ, final int priority,
++ final ChunkTaskManager taskManager) {
++ super(priority);
++ this.world = world;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.taskManager = taskManager;
++ }
++
++ @Override
++ public String toString() {
++ return "Chunk task: class:" + this.getClass().getName() + ", for world '" + this.world.getWorld().getName() +
++ "', (" + this.chunkX + "," + this.chunkZ + "), hashcode:" + this.hashCode();
++ }
++
++ @Override
++ public boolean raisePriority(final int priority) {
++ ConcreteFileIOThread.Holder.INSTANCE.bumpPriority(this.world, this.chunkX, this.chunkZ, priority);
++ return super.raisePriority(priority);
++ }
++
++ @Override
++ public boolean updatePriority(final int priority) {
++ ConcreteFileIOThread.Holder.INSTANCE.setPriority(this.world, this.chunkX, this.chunkZ, priority);
++ return super.updatePriority(priority);
++ }
++}
+diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTaskManager.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTaskManager.java
+new file mode 100644
+index 0000000000..373793c488
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTaskManager.java
+@@ -0,0 +1,303 @@
++package com.destroystokyo.paper.io.chunk;
++
++import com.destroystokyo.paper.io.ConcreteFileIOThread;
++import com.destroystokyo.paper.io.IOUtil;
++import com.destroystokyo.paper.io.PrioritizedTaskQueue;
++import com.destroystokyo.paper.io.QueueExecutorThread;
++import net.minecraft.server.ChunkRegionLoader;
++import net.minecraft.server.IAsyncTaskHandler;
++import net.minecraft.server.IChunkAccess;
++import net.minecraft.server.MinecraftServer;
++import net.minecraft.server.NBTTagCompound;
++import net.minecraft.server.WorldServer;
++import org.bukkit.Bukkit;
++import org.spigotmc.AsyncCatcher;
++
++import java.util.concurrent.CompletableFuture;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.function.Consumer;
++
++public final class ChunkTaskManager {
++
++ private final QueueExecutorThread<ChunkTask>[] workers;
++ private final WorldServer world;
++
++ private final PrioritizedTaskQueue<ChunkTask> queue;
++ private final boolean perWorldQueue;
++
++ final ConcurrentHashMap<Long, ChunkLoadTask> chunkLoadTasks = new ConcurrentHashMap<>(64, 0.5f);
++ final ConcurrentHashMap<Long, ChunkSaveTask> chunkSaveTasks = new ConcurrentHashMap<>(64, 0.5f);
++
++ // used if async chunks are disabled in config
++ protected static QueueExecutorThread<ChunkTask>[] globalWorkers;
++ protected static PrioritizedTaskQueue<ChunkTask> globalQueue;
++
++ public static void initGlobalLoadThreads(int threads) {
++ if (threads <= 0) {
++ return;
++ }
++
++ globalWorkers = new QueueExecutorThread[threads];
++ globalQueue = new PrioritizedTaskQueue<>();
++
++ for (int i = 0; i < threads; ++i) {
++ globalWorkers[i] = new QueueExecutorThread<>(globalQueue, (long)0.10e6); //0.1ms
++ globalWorkers[i].setName("Async chunk loader thread #" + i);
++ globalWorkers[i].setPriority(Thread.NORM_PRIORITY - 1);
++ globalWorkers[i].setUncaughtExceptionHandler((final Thread thread, final Throwable throwable) -> {
++ ConcreteFileIOThread.LOGGER.fatal("Thread '" + thread.getName() + "' threw an uncaught exception!", throwable);
++ });
++
++ globalWorkers[i].start();
++ }
++ }
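++
++ // Startup sketch (illustrative only; the thread count is assumed to come from configuration, the
++ // actual config key lives elsewhere in the patch):
++ //   ChunkTaskManager.initGlobalLoadThreads(configuredThreadCount); // no-op when the count is <= 0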
++
++ /**
++ * Creates this chunk task manager to operate off the specified number of threads. If the specified number of threads is
++ * less-than or equal to 0, then this chunk task manager will operate off of the world's chunk task queue.
++ * @param world Specified world.
++ * @param threads Specified number of threads.
++ * @see net.minecraft.server.ChunkProviderServer#serverThreadQueue
++ */
++ public ChunkTaskManager(final WorldServer world, final int threads) {
++ this.world = world;
++ this.workers = threads <= 0 ? null : new QueueExecutorThread[threads];
++ this.queue = new PrioritizedTaskQueue<>();
++ this.perWorldQueue = true;
++
++ for (int i = 0; i < threads; ++i) {
++ this.workers[i] = new QueueExecutorThread<>(this.queue, (long)0.10e6); //0.1ms
++ this.workers[i].setName("Async chunk loader thread #" + i + " for world: " + world.getWorldData().getName());
++ this.workers[i].setPriority(Thread.NORM_PRIORITY - 1);
++ this.workers[i].setUncaughtExceptionHandler((final Thread thread, final Throwable throwable) -> {
++ ConcreteFileIOThread.LOGGER.fatal("Thread '" + thread.getName() + "' threw an uncaught exception!", throwable);
++ });
++
++ this.workers[i].start();
++ }
++ }
++
++ /**
++ * Creates the chunk task manager to work from the global workers. When {@link #close(boolean)} is invoked,
++ * the global queue is not shut down. If the global workers are configured to be disabled or to use 0 threads, then
++ * this chunk task manager will operate off of the world's chunk task queue.
++ * @param world The world that this task manager is responsible for
++ * @see net.minecraft.server.ChunkProviderServer#serverThreadQueue
++ */
++ public ChunkTaskManager(final WorldServer world) {
++ this.world = world;
++ this.workers = globalWorkers;
++ this.queue = globalQueue;
++ this.perWorldQueue = false;
++ }
++
++ /**
++ * The exact same as {@link #scheduleChunkLoad(int, int, int, Consumer, boolean)}, except that the chunk data is provided as
++ * the {@code data} parameter.
++ */
++ public ChunkLoadTask scheduleChunkLoad(final int chunkX, final int chunkZ, final int priority,
++ final Consumer<ChunkRegionLoader.InProgressChunkHolder> onComplete,
++ final boolean intendingToBlock, final NBTTagCompound data) {
++ final WorldServer world = this.world;
++
++ return this.chunkLoadTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
++ if (valueInMap != null) {
++ throw new IllegalStateException("Double scheduling chunk load");
++ }
++
++ final ChunkLoadTask ret = new ChunkLoadTask(world, chunkX, chunkZ, priority, ChunkTaskManager.this, onComplete);
++
++ ConcreteFileIOThread.Holder.INSTANCE.loadChunkDataAsync(world, chunkX, chunkZ, priority, (final ConcreteFileIOThread.ChunkData chunkData) -> {
++ ret.chunkData = chunkData;
++ chunkData.chunkData = data;
++ ChunkTaskManager.this.internalSchedule(ret); // only schedule to the worker threads here
++ }, true, false, intendingToBlock);
++
++ return ret;
++ });
++ }
++
++ /**
++ * Schedules an asynchronous chunk load for the specified coordinates. The onComplete parameter may be invoked asynchronously
++ * on a worker thread or on the world's chunk executor queue. As such the code that is executed for the parameter should be
++ * carefully chosen.
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param priority Priority for this task
++ * @param onComplete The consumer to invoke with the {@link net.minecraft.server.ChunkRegionLoader.InProgressChunkHolder} object once this task is complete
++ * @param intendingToBlock Whether the caller is intending to block on this task completing (this is a performance tune, and has no adverse side-effects)
++ * @return The {@link ChunkLoadTask} associated with this scheduled load
++ */
++ public ChunkLoadTask scheduleChunkLoad(final int chunkX, final int chunkZ, final int priority,
++ final Consumer<ChunkRegionLoader.InProgressChunkHolder> onComplete,
++ final boolean intendingToBlock) {
++ final WorldServer world = this.world;
++
++ return this.chunkLoadTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
++ if (valueInMap != null) {
++ throw new IllegalStateException("Double scheduling chunk load");
++ }
++
++ final ChunkLoadTask ret = new ChunkLoadTask(world, chunkX, chunkZ, priority, ChunkTaskManager.this, onComplete);
++
++ ConcreteFileIOThread.Holder.INSTANCE.loadChunkDataAsync(world, chunkX, chunkZ, priority, (final ConcreteFileIOThread.ChunkData chunkData) -> {
++ ret.chunkData = chunkData;
++ ChunkTaskManager.this.internalSchedule(ret); // only schedule to the worker threads here
++ }, true, true, intendingToBlock);
++
++ return ret;
++ });
++ }
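++
++ // Caller sketch (illustrative only; the consumer may run on a worker thread or on the world's chunk
++ // executor queue, so it should only hand the holder back to the main thread):
++ //   world.asyncChunkTaskManager.scheduleChunkLoad(chunkX, chunkZ,
++ //       com.destroystokyo.paper.io.PrioritizedTaskQueue.NORMAL_PRIORITY,
++ //       (holder) -> { /* post holder.protoChunk back to the server thread queue */ }, false);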
++
++ /**
++ * Schedules an async save for the specified chunk. The chunk, at the beginning of this call, must be completely unloaded
++ * from the world.
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param priority Priority for this task
++ * @param asyncSaveData Async save data. See {@link ChunkRegionLoader#getAsyncSaveData(WorldServer, IChunkAccess)}
++ * @param chunk Chunk to save
++ * @return The {@link ChunkSaveTask} associated with the save task.
++ */
++ public ChunkSaveTask scheduleChunkSave(final int chunkX, final int chunkZ, final int priority,
++ final ChunkRegionLoader.AsyncSaveData asyncSaveData,
++ final IChunkAccess chunk) {
++ AsyncCatcher.catchOp("chunk save schedule");
++
++ final WorldServer world = this.world;
++
++ return this.chunkSaveTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkSaveTask valueInMap) -> {
++ if (valueInMap != null) {
++ throw new IllegalStateException("Double scheduling chunk save");
++ }
++
++ final ChunkSaveTask ret = new ChunkSaveTask(world, chunkX, chunkZ, priority, ChunkTaskManager.this, asyncSaveData, chunk);
++
++ ChunkTaskManager.this.internalSchedule(ret);
++
++ return ret;
++ });
++ }
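++
++ // Caller sketch (illustrative only; must run on the main thread, mirroring PlayerChunkMap#asyncSave):
++ //   ChunkRegionLoader.AsyncSaveData data = ChunkRegionLoader.getAsyncSaveData(world, chunk);
++ //   world.asyncChunkTaskManager.scheduleChunkSave(chunkX, chunkZ,
++ //       com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY, data, chunk);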
++
++ /**
++ * Returns a completable future which will be completed with the un-copied chunk data for an in progress async save.
++ * Returns {@code null} if no save is in progress.
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ */
++ public CompletableFuture<NBTTagCompound> getChunkSaveFuture(final int chunkX, final int chunkZ) {
++ final ChunkSaveTask chunkSaveTask = this.chunkSaveTasks.get(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)));
++ if (chunkSaveTask == null) {
++ return null;
++ }
++ return chunkSaveTask.onComplete;
++ }
++
++ /**
++ * Returns the chunk object being used to serialize data async for an unloaded chunk. Note that modifying this chunk
++ * is not safe to do as another thread is handling its save. The chunk is also not loaded into the world.
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @return Chunk object for an in-progress async save, or {@code null} if no save is in progress
++ */
++ public IChunkAccess getChunkInSaveProgress(final int chunkX, final int chunkZ) {
++ final ChunkSaveTask chunkSaveTask = this.chunkSaveTasks.get(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)));
++ if (chunkSaveTask == null) {
++ return null;
++ }
++ return chunkSaveTask.chunk;
++ }
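++
++ // Caller sketch (illustrative only): readers that miss a loaded chunk can fall back to the copy that
++ // is currently being serialized, treating it strictly as read-only:
++ //   IChunkAccess saving = taskManager.getChunkInSaveProgress(chunkX, chunkZ);
++ //   if (saving != null) { /* read-only access; the chunk is not loaded into the world */ }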
++
++ public void flush() {
++ // flush here since we schedule tasks on the IO thread that can schedule tasks here
++ ConcreteFileIOThread.Holder.INSTANCE.flush();
++
++ if (this.workers == null) {
++ if (Bukkit.isPrimaryThread()) {
++ ((IAsyncTaskHandler) this.world.getChunkProvider().serverThreadQueue).executeAll();
++ } else {
++ CompletableFuture<Void> wait = new CompletableFuture<>();
++ MinecraftServer.getServer().scheduleOnMain(() -> {
++ ((IAsyncTaskHandler) this.world.getChunkProvider().serverThreadQueue).executeAll();
++ wait.complete(null); // wake the waiting thread once the main thread has drained its queue
++ });
++ wait.join();
++ }
++ return;
++ }
++
++ for (final QueueExecutorThread worker : this.workers) {
++ worker.flush();
++ }
++
++ // flush again since the tasks we executed may schedule further async saves onto the IO thread
++ ConcreteFileIOThread.Holder.INSTANCE.flush();
++ }
++
++ public void close(final boolean wait) {
++ // flush here since we schedule tasks on the IO thread that can schedule tasks to this task manager
++ // we do this regardless of the wait param since after we invoke close no tasks can be queued
++ ConcreteFileIOThread.Holder.INSTANCE.flush();
++
++ if (this.workers == null) {
++ if (wait) {
++ this.flush();
++ }
++ return;
++ }
++
++ if (this.workers != globalWorkers) {
++ for (final QueueExecutorThread worker : this.workers) {
++ worker.close(false, this.perWorldQueue);
++ }
++ }
++
++ if (wait) {
++ this.flush();
++ }
++ }
++
++ public void raisePriority(final int chunkX, final int chunkZ, final int priority) {
++ final Long chunkKey = Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ));
++
++ ChunkSaveTask chunkSaveTask = this.chunkSaveTasks.get(chunkKey);
++ if (chunkSaveTask != null) {
++ chunkSaveTask.raisePriority(priority);
++ if (chunkSaveTask.isScheduled() && chunkSaveTask.getPriority() != PrioritizedTaskQueue.COMPLETING_PRIORITY) {
++ // only notify if we're in queue to be executed
++ this.internalScheduleNotify();
++ }
++ }
++
++ ChunkLoadTask chunkLoadTask = this.chunkLoadTasks.get(chunkKey);
++ if (chunkLoadTask != null) {
++ chunkLoadTask.raisePriority(priority);
++ if (chunkLoadTask.isScheduled() && chunkLoadTask.getPriority() != PrioritizedTaskQueue.COMPLETING_PRIORITY) {
++ // only notify if we're in queue to be executed
++ this.internalScheduleNotify();
++ }
++ }
++ }
++
++ protected void internalSchedule(final ChunkTask task) {
++ if (this.workers == null) {
++ // execute() will execute immediately if we're main
++ ((IAsyncTaskHandler)this.world.getChunkProvider().serverThreadQueue).addTask(task);
++ return;
++ }
++
++ // It's important we order the task to be executed before notifying. Avoid a race condition where the worker thread
++ // wakes up and goes to sleep before we actually schedule (or it's just about to sleep)
++ this.queue.add(task);
++ this.internalScheduleNotify();
++ }
++
++ protected void internalScheduleNotify() {
++ for (final QueueExecutorThread worker : this.workers) {
++ if (worker.notifyTasks()) {
++ // break here since we only want to wake up one worker for scheduling one task
++ break;
++ }
++ }
++ }
++
++}
+diff --git a/src/main/java/net/minecraft/server/ChunkProviderServer.java b/src/main/java/net/minecraft/server/ChunkProviderServer.java
+index 775b5f7fe3..e75e311376 100644
+--- a/src/main/java/net/minecraft/server/ChunkProviderServer.java
++++ b/src/main/java/net/minecraft/server/ChunkProviderServer.java
+@@ -160,11 +160,143 @@ public class ChunkProviderServer extends IChunkProvider {
+ return playerChunk.getAvailableChunkNow();
+
+ }
++
++ private long asyncLoadSeqCounter;
++
++ public void getChunkAtAsynchronously(int x, int z, boolean gen, java.util.function.Consumer<Chunk> onComplete) {
++ if (Thread.currentThread() != this.serverThread) {
++ this.serverThreadQueue.execute(() -> {
++ this.getChunkAtAsynchronously(x, z, gen, onComplete);
++ });
++ return;
++ }
++
++ long k = ChunkCoordIntPair.pair(x, z);
++ ChunkCoordIntPair chunkPos = new ChunkCoordIntPair(x, z);
++
++ IChunkAccess ichunkaccess;
++
++ // try cache
++ for (int l = 0; l < 4; ++l) {
++ if (k == this.cachePos[l] && ChunkStatus.FULL == this.cacheStatus[l]) {
++ ichunkaccess = this.cacheChunk[l];
++ if (ichunkaccess != null) { // CraftBukkit - the chunk can become accessible in the meantime TODO for non-null chunks it might also make sense to check that the chunk's state hasn't changed in the meantime
++
++ // move to first in cache
++
++ for (int i1 = 3; i1 > 0; --i1) {
++ this.cachePos[i1] = this.cachePos[i1 - 1];
++ this.cacheStatus[i1] = this.cacheStatus[i1 - 1];
++ this.cacheChunk[i1] = this.cacheChunk[i1 - 1];
++ }
++
++ this.cachePos[0] = k;
++ this.cacheStatus[0] = ChunkStatus.FULL;
++ this.cacheChunk[0] = ichunkaccess;
++
++ onComplete.accept((Chunk)ichunkaccess);
++
++ return;
++ }
++ }
++ }
++
++ if (gen) {
++ this.bringToFullStatusAsync(x, z, chunkPos, onComplete);
++ return;
++ }
++
++ IChunkAccess current = this.getChunkAtImmediately(x, z); // we want to bypass ticket restrictions
++ if (current != null) {
++ if (!(current instanceof ProtoChunkExtension) && !(current instanceof net.minecraft.server.Chunk)) {
++ onComplete.accept(null); // the chunk is not gen'd
++ return;
++ }
++ // we know the chunk is at full status here (either in read-only mode or the real thing)
++ this.bringToFullStatusAsync(x, z, chunkPos, onComplete);
++ return;
++ } else {
++ // Paper start - async io
++ ChunkStatus status = world.getChunkProvider().playerChunkMap.getStatusOnDiskNoLoad(x, z); // Paper - async io - move to own method
++
++ if (status == ChunkStatus.EMPTY) {
++ // does not exist on disk
++ onComplete.accept(null);
++ return;
++ }
++
++ if (status == ChunkStatus.FULL) {
++ this.bringToFullStatusAsync(x, z, chunkPos, onComplete);
++ return;
++ } else if (status != null) {
++ onComplete.accept(null);
++ return; // not full status on disk
++ }
++ // status is null here
++ // Paper end
++
++ // at this stage we don't know what status the chunk is in
++ }
++
++ // here we don't know what status it is and we're not supposed to generate
++ // so we asynchronously load empty status
++
++ this.bringToStatusAsync(x, z, chunkPos, ChunkStatus.EMPTY, (IChunkAccess chunk) -> {
++ if (!(chunk instanceof ProtoChunkExtension) && !(chunk instanceof net.minecraft.server.Chunk)) {
++ // the chunk on disk was not a full status chunk
++ onComplete.accept(null);
++ return;
++ }
++ this.bringToFullStatusAsync(x, z, chunkPos, onComplete); // bring to full status if required
++ });
++ }
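++
++ // Caller sketch (illustrative only; the consumer runs on the server thread and receives null when
++ // gen is false and no full-status chunk exists on disk):
++ //   chunkProvider.getChunkAtAsynchronously(chunkX, chunkZ, true, (chunk) -> { /* use chunk */ });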
++
++ private void bringToFullStatusAsync(int x, int z, ChunkCoordIntPair chunkPos, java.util.function.Consumer<Chunk> onComplete) {
++ this.bringToStatusAsync(x, z, chunkPos, ChunkStatus.FULL, (java.util.function.Consumer)onComplete);
++ }
++
++ private void bringToStatusAsync(int x, int z, ChunkCoordIntPair chunkPos, ChunkStatus status, java.util.function.Consumer<IChunkAccess> onComplete) {
++ CompletableFuture<Either<IChunkAccess, PlayerChunk.Failure>> future = this.getChunkFutureMainThread(x, z, status, true);
++ long identifier = this.asyncLoadSeqCounter++;
++ int ticketLevel = MCUtil.getTicketLevelFor(status);
++ this.addTicketAtLevel(TicketType.ASYNC_LOAD, chunkPos, ticketLevel, identifier);
++
++ future.whenCompleteAsync((Either<IChunkAccess, PlayerChunk.Failure> either, Throwable throwable) -> {
++ // either left -> success
++ // either right -> failure
++
++ if (throwable != null) {
++ throw new RuntimeException(throwable);
++ }
++
++ this.removeTicketAtLevel(TicketType.ASYNC_LOAD, chunkPos, ticketLevel, identifier);
++ this.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, ticketLevel, chunkPos); // allow unloading
++
++ Optional<PlayerChunk.Failure> failure = either.right();
++
++ if (failure.isPresent()) {
++ // failure
++ throw new IllegalStateException("Chunk failed to load: " + failure.get().toString());
++ }
++
++ onComplete.accept(either.left().get());
++
++ }, this.serverThreadQueue);
++ }
++
++ public <T> void addTicketAtLevel(TicketType<T> ticketType, ChunkCoordIntPair chunkPos, int ticketLevel, T identifier) {
++ this.chunkMapDistance.addTicketAtLevel(ticketType, chunkPos, ticketLevel, identifier);
++ }
++
++ public <T> void removeTicketAtLevel(TicketType<T> ticketType, ChunkCoordIntPair chunkPos, int ticketLevel, T identifier) {
++ this.chunkMapDistance.removeTicketAtLevel(ticketType, chunkPos, ticketLevel, identifier);
++ }
+ // Paper end
+
+ @Nullable
+ @Override
+ public IChunkAccess getChunkAt(int i, int j, ChunkStatus chunkstatus, boolean flag) {
++ final int x = i; final int z = j; // Paper - conflict on variable change
+ if (Thread.currentThread() != this.serverThread) {
+ return (IChunkAccess) CompletableFuture.supplyAsync(() -> {
+ return this.getChunkAt(i, j, chunkstatus, flag);
+@@ -186,6 +318,9 @@ public class ChunkProviderServer extends IChunkProvider {
+ CompletableFuture<Either<IChunkAccess, PlayerChunk.Failure>> completablefuture = this.getChunkFutureMainThread(i, j, chunkstatus, flag);
+
+ if (!completablefuture.isDone()) { // Paper
++ // Paper start - async chunk io // Paper start - async chunk loading
++ this.world.asyncChunkTaskManager.raisePriority(x, z, com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGHEST_PRIORITY);
++ // Paper end
+ this.world.timings.chunkAwait.startTiming(); // Paper
+ this.serverThreadQueue.awaitTasks(completablefuture::isDone);
+ this.world.timings.chunkAwait.stopTiming(); // Paper
+diff --git a/src/main/java/net/minecraft/server/ChunkRegionLoader.java b/src/main/java/net/minecraft/server/ChunkRegionLoader.java
+index a028074112..61157b5dd4 100644
+--- a/src/main/java/net/minecraft/server/ChunkRegionLoader.java
++++ b/src/main/java/net/minecraft/server/ChunkRegionLoader.java
+@@ -6,6 +6,7 @@ import it.unimi.dsi.fastutil.longs.LongOpenHashSet;
+ import it.unimi.dsi.fastutil.longs.LongSet;
+ import it.unimi.dsi.fastutil.shorts.ShortList;
+ import it.unimi.dsi.fastutil.shorts.ShortListIterator;
++import java.util.ArrayDeque; // Paper
+ import java.util.Arrays;
+ import java.util.BitSet;
+ import java.util.EnumSet;
+@@ -22,7 +23,29 @@ public class ChunkRegionLoader {
+
+ private static final Logger LOGGER = LogManager.getLogger();
+
++ // Paper start
++ public static final class InProgressChunkHolder {
++
++ public final ProtoChunk protoChunk;
++ public final ArrayDeque<Runnable> tasks;
++
++ public NBTTagCompound poiData;
++
++ public InProgressChunkHolder(final ProtoChunk protoChunk, final ArrayDeque<Runnable> tasks) {
++ this.protoChunk = protoChunk;
++ this.tasks = tasks;
++ }
++ }
++
+ public static ProtoChunk loadChunk(WorldServer worldserver, DefinedStructureManager definedstructuremanager, VillagePlace villageplace, ChunkCoordIntPair chunkcoordintpair, NBTTagCompound nbttagcompound) {
++ InProgressChunkHolder holder = loadChunk(worldserver, definedstructuremanager, villageplace, chunkcoordintpair, nbttagcompound, true);
++ holder.tasks.forEach(Runnable::run);
++ return holder.protoChunk;
++ }
++
++ public static InProgressChunkHolder loadChunk(WorldServer worldserver, DefinedStructureManager definedstructuremanager, VillagePlace villageplace, ChunkCoordIntPair chunkcoordintpair, NBTTagCompound nbttagcompound, boolean distinguish) {
++ ArrayDeque<Runnable> tasksToExecuteOnMain = new ArrayDeque<>();
++ // Paper end
+ ChunkGenerator<?> chunkgenerator = worldserver.getChunkProvider().getChunkGenerator();
+ WorldChunkManager worldchunkmanager = chunkgenerator.getWorldChunkManager();
+ NBTTagCompound nbttagcompound1 = nbttagcompound.getCompound("Level");
+@@ -66,7 +89,9 @@ public class ChunkRegionLoader {
+ LightEngine lightengine = chunkproviderserver.getLightEngine();
+
+ if (flag) {
+- lightengine.b(chunkcoordintpair, true);
++ tasksToExecuteOnMain.add(() -> { // Paper - delay this task since we're executing off-main
++ lightengine.b(chunkcoordintpair, true);
++ }); // Paper - delay this task since we're executing off-main
+ }
+
+ for (int k = 0; k < nbttaglist.size(); ++k) {
+@@ -82,16 +107,30 @@ public class ChunkRegionLoader {
+ achunksection[b0] = chunksection;
+ }
+
+- villageplace.a(chunkcoordintpair, chunksection);
++ tasksToExecuteOnMain.add(() -> { // Paper - delay this task since we're executing off-main
++ villageplace.a(chunkcoordintpair, chunksection);
++ }); // Paper - delay this task since we're executing off-main
+ }
+
+ if (flag) {
+ if (nbttagcompound2.hasKeyOfType("BlockLight", 7)) {
+- lightengine.a(EnumSkyBlock.BLOCK, SectionPosition.a(chunkcoordintpair, b0), new NibbleArray(nbttagcompound2.getByteArray("BlockLight")));
++ // Paper start - delay this task since we're executing off-main
++ NibbleArray blockLight = new NibbleArray(nbttagcompound2.getByteArray("BlockLight"));
++ // Note: We move the block light nibble array creation here for perf & in case the compound is modified
++ tasksToExecuteOnMain.add(() -> {
++ lightengine.a(EnumSkyBlock.BLOCK, SectionPosition.a(chunkcoordintpair, b0), blockLight);
++ });
++ // Paper end
+ }
+
+ if (flag2 && nbttagcompound2.hasKeyOfType("SkyLight", 7)) {
+- lightengine.a(EnumSkyBlock.SKY, SectionPosition.a(chunkcoordintpair, b0), new NibbleArray(nbttagcompound2.getByteArray("SkyLight")));
++ // Paper start - delay this task since we're executing off-main
++ NibbleArray skyLight = new NibbleArray(nbttagcompound2.getByteArray("SkyLight"));
++ // Note: We move the sky light nibble array creation here for perf & in case the compound is modified
++ tasksToExecuteOnMain.add(() -> {
++ lightengine.a(EnumSkyBlock.SKY, SectionPosition.a(chunkcoordintpair, b0), skyLight);
++ });
++ // Paper end
+ }
+ }
+ }
+@@ -194,7 +233,7 @@ public class ChunkRegionLoader {
+ }
+
+ if (chunkstatus_type == ChunkStatus.Type.LEVELCHUNK) {
+- return new ProtoChunkExtension((Chunk) object);
++ return new InProgressChunkHolder(new ProtoChunkExtension((Chunk) object), tasksToExecuteOnMain); // Paper - Async chunk loading
+ } else {
+ ProtoChunk protochunk1 = (ProtoChunk) object;
+
+@@ -233,11 +272,83 @@ public class ChunkRegionLoader {
+ protochunk1.a(worldgenstage_features, BitSet.valueOf(nbttagcompound5.getByteArray(s1)));
+ }
+
+- return protochunk1;
++ return new InProgressChunkHolder(protochunk1, tasksToExecuteOnMain); // Paper - Async chunk loading
+ }
+ }
+
++ // Paper start - async chunk save for unload
++ public static final class AsyncSaveData {
++ public final NibbleArray[] blockLight; // null or size of 17 (for indices -1 through 15)
++ public final NibbleArray[] skyLight;
++
++ public final NBTTagList blockTickList; // non-null if we had to go to the server's tick list
++ public final NBTTagList fluidTickList; // non-null if we had to go to the server's tick list
++
++ public final long worldTime;
++
++ public AsyncSaveData(NibbleArray[] blockLight, NibbleArray[] skyLight, NBTTagList blockTickList, NBTTagList fluidTickList,
++ long worldTime) {
++ this.blockLight = blockLight;
++ this.skyLight = skyLight;
++ this.blockTickList = blockTickList;
++ this.fluidTickList = fluidTickList;
++ this.worldTime = worldTime;
++ }
++ }
++
++ // must be called sync
++ public static AsyncSaveData getAsyncSaveData(WorldServer world, IChunkAccess chunk) {
++ org.spigotmc.AsyncCatcher.catchOp("preparation of chunk data for async save");
++ ChunkCoordIntPair chunkPos = chunk.getPos();
++
++ LightEngineThreaded lightenginethreaded = world.getChunkProvider().getLightEngine();
++
++ NibbleArray[] blockLight = new NibbleArray[17 - (-1)];
++ NibbleArray[] skyLight = new NibbleArray[17 - (-1)];
++
++ for (int i = -1; i < 17; ++i) {
++ NibbleArray blockArray = lightenginethreaded.a(EnumSkyBlock.BLOCK).a(SectionPosition.a(chunkPos, i)); // TODO obfhelpers
++ NibbleArray skyArray = lightenginethreaded.a(EnumSkyBlock.SKY).a(SectionPosition.a(chunkPos, i)); // TODO obfhelpers
++
++ // copy data for safety
++ if (blockArray != null) {
++ blockArray = blockArray.copy();
++ }
++ if (skyArray != null) {
++ skyArray = skyArray.copy();
++ }
++
++ // apply offset of 1 for -1 starting index
++ blockLight[i + 1] = blockArray;
++ skyLight[i + 1] = skyArray;
++ }
++
++ TickList<Block> blockTickList = chunk.n(); // TODO obfhelper
++
++ NBTTagList blockTickListSerialized;
++ if (blockTickList instanceof ProtoChunkTickList || blockTickList instanceof TickListChunk) {
++ blockTickListSerialized = null;
++ } else {
++ blockTickListSerialized = world.getBlockTickList().a(chunkPos); // TODO obfhelper
++ }
++
++ TickList<FluidType> fluidTickList = chunk.o(); // TODO obfhelper
++
++ NBTTagList fluidTickListSerialized;
++ if (fluidTickList instanceof ProtoChunkTickList || fluidTickList instanceof TickListChunk) {
++ fluidTickListSerialized = null;
++ } else {
++ fluidTickListSerialized = world.getFluidTickList().a(chunkPos); // TODO obfhelper
++ }
++
++ return new AsyncSaveData(blockLight, skyLight, blockTickListSerialized, fluidTickListSerialized, world.getTime());
++ }
++
+ public static NBTTagCompound saveChunk(WorldServer worldserver, IChunkAccess ichunkaccess) {
++ return saveChunk(worldserver, ichunkaccess, null);
++ }
++ public static NBTTagCompound saveChunk(WorldServer worldserver, IChunkAccess ichunkaccess, AsyncSaveData asyncsavedata) {
++ // Paper end
+ ChunkCoordIntPair chunkcoordintpair = ichunkaccess.getPos();
+ NBTTagCompound nbttagcompound = new NBTTagCompound();
+ NBTTagCompound nbttagcompound1 = new NBTTagCompound();
+@@ -246,7 +357,7 @@ public class ChunkRegionLoader {
+ nbttagcompound.set("Level", nbttagcompound1);
+ nbttagcompound1.setInt("xPos", chunkcoordintpair.x);
+ nbttagcompound1.setInt("zPos", chunkcoordintpair.z);
+- nbttagcompound1.setLong("LastUpdate", worldserver.getTime());
++ nbttagcompound1.setLong("LastUpdate", asyncsavedata != null ? asyncsavedata.worldTime : worldserver.getTime()); // Paper - async chunk unloading
+ nbttagcompound1.setLong("InhabitedTime", ichunkaccess.q());
+ nbttagcompound1.setString("Status", ichunkaccess.getChunkStatus().d());
+ ChunkConverter chunkconverter = ichunkaccess.p();
+@@ -262,14 +373,22 @@ public class ChunkRegionLoader {
+
+ NBTTagCompound nbttagcompound2;
+
+- for (int i = -1; i < 17; ++i) {
++ for (int i = -1; i < 17; ++i) { // Paper - conflict on loop parameter change
+ int finalI = i;
+ ChunkSection chunksection = (ChunkSection) Arrays.stream(achunksection).filter((chunksection1) -> {
+ return chunksection1 != null && chunksection1.getYPosition() >> 4 == finalI;
+ }).findFirst().orElse(Chunk.a);
+- NibbleArray nibblearray = lightenginethreaded.a(EnumSkyBlock.BLOCK).a(SectionPosition.a(chunkcoordintpair, i));
+- NibbleArray nibblearray1 = lightenginethreaded.a(EnumSkyBlock.SKY).a(SectionPosition.a(chunkcoordintpair, i));
+-
++ // Paper start - async chunk save for unload
++ NibbleArray nibblearray; // block light
++ NibbleArray nibblearray1; // sky light
++ if (asyncsavedata == null) {
++ nibblearray = lightenginethreaded.a(EnumSkyBlock.BLOCK).a(SectionPosition.a(chunkcoordintpair, i));
++ nibblearray1 = lightenginethreaded.a(EnumSkyBlock.SKY).a(SectionPosition.a(chunkcoordintpair, i));
++ } else {
++ nibblearray = asyncsavedata.blockLight[i + 1]; // +1 to offset the -1 starting index
++ nibblearray1 = asyncsavedata.skyLight[i + 1]; // +1 to offset the -1 starting index
++ }
++ // Paper end
+ if (chunksection != Chunk.a || nibblearray != null || nibblearray1 != null) {
+ nbttagcompound2 = new NBTTagCompound();
+ nbttagcompound2.setByte("Y", (byte) (i & 255));
+@@ -334,10 +453,10 @@ public class ChunkRegionLoader {
+ // Paper start
+ if ((int)Math.floor(entity.locX) >> 4 != chunk.getPos().x || (int)Math.floor(entity.locZ) >> 4 != chunk.getPos().z) {
+ LogManager.getLogger().warn(entity + " is not in this chunk, skipping save. This a bug fix to a vanilla bug. Do not report this to PaperMC please.");
+- toUpdate.add(entity);
++ if (asyncsavedata == null) toUpdate.add(entity); // todo fix this broken code, entityJoinedWorld won't work in this case!
+ continue;
+ }
+- if (entity.dead) {
++ if (asyncsavedata == null && entity.dead) { // todo
+ continue;
+ }
+ // Paper end
+@@ -378,7 +497,11 @@ public class ChunkRegionLoader {
+ if (ticklist instanceof ProtoChunkTickList) {
+ nbttagcompound1.set("ToBeTicked", ((ProtoChunkTickList) ticklist).b());
+ } else if (ticklist instanceof TickListChunk) {
+- nbttagcompound1.set("TileTicks", ((TickListChunk) ticklist).a(worldserver.getTime()));
++ nbttagcompound1.set("TileTicks", ((TickListChunk) ticklist).a(asyncsavedata != null ? asyncsavedata.worldTime : worldserver.getTime())); // Paper - async chunk unloading
++ // Paper start - async chunk save for unload
++ } else if (asyncsavedata != null) {
++ nbttagcompound1.set("TileTicks", asyncsavedata.blockTickList);
++ // Paper end
+ } else {
+ nbttagcompound1.set("TileTicks", worldserver.getBlockTickList().a(chunkcoordintpair));
+ }
+@@ -388,7 +511,11 @@ public class ChunkRegionLoader {
+ if (ticklist1 instanceof ProtoChunkTickList) {
+ nbttagcompound1.set("LiquidsToBeTicked", ((ProtoChunkTickList) ticklist1).b());
+ } else if (ticklist1 instanceof TickListChunk) {
+- nbttagcompound1.set("LiquidTicks", ((TickListChunk) ticklist1).a(worldserver.getTime()));
++ nbttagcompound1.set("LiquidTicks", ((TickListChunk) ticklist1).a(asyncsavedata != null ? asyncsavedata.worldTime : worldserver.getTime())); // Paper - async chunk unloading
++ // Paper start - async chunk save for unload
++ } else if (asyncsavedata != null) {
++ nbttagcompound1.set("LiquidTicks", asyncsavedata.fluidTickList);
++ // Paper end
+ } else {
+ nbttagcompound1.set("LiquidTicks", worldserver.getFluidTickList().a(chunkcoordintpair));
+ }
+diff --git a/src/main/java/net/minecraft/server/ChunkStatus.java b/src/main/java/net/minecraft/server/ChunkStatus.java
+index e324989b46..abb0d69d2f 100644
+--- a/src/main/java/net/minecraft/server/ChunkStatus.java
++++ b/src/main/java/net/minecraft/server/ChunkStatus.java
+@@ -153,6 +153,7 @@ public class ChunkStatus {
+ return ChunkStatus.q.size();
+ }
+
++ public static int getTicketLevelOffset(ChunkStatus status) { return ChunkStatus.a(status); } // Paper - OBFHELPER
+ public static int a(ChunkStatus chunkstatus) {
+ return ChunkStatus.r.getInt(chunkstatus.c());
+ }
+diff --git a/src/main/java/net/minecraft/server/IAsyncTaskHandler.java b/src/main/java/net/minecraft/server/IAsyncTaskHandler.java
+index d521d25cf5..84024e6ba4 100644
+--- a/src/main/java/net/minecraft/server/IAsyncTaskHandler.java
++++ b/src/main/java/net/minecraft/server/IAsyncTaskHandler.java
+@@ -91,7 +91,7 @@ public abstract class IAsyncTaskHandler implements Mailbox public
+ while (this.executeNext()) {
+ ;
+ }
+diff --git a/src/main/java/net/minecraft/server/IChunkLoader.java b/src/main/java/net/minecraft/server/IChunkLoader.java
+index 3f14392e6e..cc933ec067 100644
+--- a/src/main/java/net/minecraft/server/IChunkLoader.java
++++ b/src/main/java/net/minecraft/server/IChunkLoader.java
+@@ -3,6 +3,10 @@ package net.minecraft.server;
+ import com.mojang.datafixers.DataFixer;
+ import java.io.File;
+ import java.io.IOException;
++// Paper start
++import java.util.concurrent.CompletableFuture;
++import java.util.concurrent.CompletionException;
++// Paper end
+ import java.util.function.Supplier;
+ import javax.annotation.Nullable;
+
+@@ -10,7 +14,9 @@ public class IChunkLoader extends RegionFileCache {
+
+ protected final DataFixer b;
+ @Nullable
+- private PersistentStructureLegacy a;
++ private volatile PersistentStructureLegacy a; // Paper - async chunk loading
++
++ private final Object persistentDataLock = new Object(); // Paper
+
+ public IChunkLoader(File file, DataFixer datafixer) {
+ super(file);
+@@ -55,9 +61,26 @@ public class IChunkLoader extends RegionFileCache {
+ NBTTagCompound level = nbttagcompound.getCompound("Level");
+ if (level.getBoolean("TerrainPopulated") && !level.getBoolean("LightPopulated")) {
+ ChunkProviderServer cps = (generatoraccess == null) ? null : ((WorldServer) generatoraccess).getChunkProvider();
++ // Paper start - Async chunk loading
++ CompletableFuture future = new CompletableFuture<>();
++ MCUtil.ensureMain((Runnable)() -> {
++ try {
++ // Paper end
+ if (check(cps, pos.x - 1, pos.z) && check(cps, pos.x - 1, pos.z - 1) && check(cps, pos.x, pos.z - 1)) {
+ level.setBoolean("LightPopulated", true);
+ }
++ // Paper start - Async chunk loading
++ future.complete(null);
++ } catch (IOException ex) {
++ future.completeExceptionally(ex);
++ }
++ });
++ try {
++ future.join();
++ } catch (CompletionException ex) {
++ com.destroystokyo.paper.util.SneakyThrow.sneaky(ex.getCause());
++ }
++ // Paper end
+ }
+ }
+ // CraftBukkit end
+@@ -65,11 +88,13 @@ public class IChunkLoader extends RegionFileCache {
+ if (i < 1493) {
+ nbttagcompound = GameProfileSerializer.a(this.b, DataFixTypes.CHUNK, nbttagcompound, i, 1493);
+ if (nbttagcompound.getCompound("Level").getBoolean("hasLegacyStructureData")) {
++ synchronized (this.persistentDataLock) { // Paper - Async chunk loading
+ if (this.a == null) {
+ this.a = PersistentStructureLegacy.a(dimensionmanager.getType(), (WorldPersistentData) supplier.get()); // CraftBukkit - getType
+ }
+
+ nbttagcompound = this.a.a(nbttagcompound);
++ } // Paper - Async chunk loading
+ }
+ }
+
+@@ -89,7 +114,9 @@ public class IChunkLoader extends RegionFileCache {
+ public void write(ChunkCoordIntPair chunkcoordintpair, NBTTagCompound nbttagcompound) throws IOException {
+ super.write(chunkcoordintpair, nbttagcompound);
+ if (this.a != null) {
++ synchronized (this.persistentDataLock) { // Paper - Async chunk loading
+ this.a.a(chunkcoordintpair.pair());
++ } // Paper - Async chunk loading
+ }
+
+ }
+diff --git a/src/main/java/net/minecraft/server/MCUtil.java b/src/main/java/net/minecraft/server/MCUtil.java
+index 23d1935dd5..14f8b61042 100644
+--- a/src/main/java/net/minecraft/server/MCUtil.java
++++ b/src/main/java/net/minecraft/server/MCUtil.java
+@@ -530,4 +530,9 @@ public final class MCUtil {
+ out.print(fileData);
+ }
+ }
++
++ public static int getTicketLevelFor(ChunkStatus status) {
++ // TODO make sure the constant `33` is correct on future updates. See getChunkAt(int, int, ChunkStatus, boolean)
++ return 33 + ChunkStatus.getTicketLevelOffset(status);
++ }
+ }
+diff --git a/src/main/java/net/minecraft/server/MinecraftServer.java b/src/main/java/net/minecraft/server/MinecraftServer.java
+index a45db15075..c5e89790fc 100644
+--- a/src/main/java/net/minecraft/server/MinecraftServer.java
++++ b/src/main/java/net/minecraft/server/MinecraftServer.java
+@@ -776,6 +776,7 @@ public abstract class MinecraftServer extends IAsyncTaskHandlerReentrant executor;
+ public final ChunkGenerator> chunkGenerator;
+- private final Supplier m;
++ private final Supplier m; public final Supplier getWorldPersistentDataSupplier() { return this.m; } // Paper - OBFHELPER
+ private final VillagePlace n;
+ public final LongSet unloadQueue;
+ private boolean updatingChunksModified;
+@@ -72,7 +72,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ public final WorldLoadListener worldLoadListener;
+ public final PlayerChunkMap.a chunkDistanceManager; public final PlayerChunkMap.a getChunkMapDistanceManager() { return this.chunkDistanceManager; } // Paper - OBFHELPER
+ private final AtomicInteger v;
+- private final DefinedStructureManager definedStructureManager;
++ public final DefinedStructureManager definedStructureManager; // Paper - private -> public
+ private final File x;
+ private final PlayerMap playerMap;
+ public final Int2ObjectMap trackedEntities;
+@@ -133,7 +133,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ this.lightEngine = new LightEngineThreaded(ilightaccess, this, this.world.getWorldProvider().g(), threadedmailbox1, this.q.a(threadedmailbox1, false));
+ this.chunkDistanceManager = new PlayerChunkMap.a(executor, iasynctaskhandler);
+ this.m = supplier;
+- this.n = new VillagePlace(new File(this.x, "poi"), datafixer);
++ this.n = new VillagePlace(new File(this.x, "poi"), datafixer, this.world); // Paper
+ this.setViewDistance(i);
+ }
+
+@@ -293,6 +293,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ @Override
+ public void close() throws IOException {
+ this.q.close();
++ this.world.asyncChunkTaskManager.close(true); // Paper - Required since we're closing regionfiles in the next line
+ this.n.close();
+ super.close();
+ }
+@@ -340,7 +341,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ shouldSave = ((Chunk) ichunkaccess).lastSaved + world.paperConfig.autoSavePeriod <= world.getTime();
+ }
+
+- if (shouldSave && this.saveChunk(ichunkaccess)) {
++ if (shouldSave && this.saveChunk(ichunkaccess, true)) { // Paper - async chunk io
+ ++savedThisTick;
+ playerchunk.m();
+ }
+@@ -360,11 +361,15 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ protected void unloadChunks(BooleanSupplier booleansupplier) {
+ GameProfilerFiller gameprofilerfiller = this.world.getMethodProfiler();
+
++ try (Timing ignored = this.world.timings.poiUnload.startTiming()) { // Paper
+ gameprofilerfiller.enter("poi");
+ this.n.a(booleansupplier);
++ } // Paper
+ gameprofilerfiller.exitEnter("chunk_unload");
+ if (!this.world.isSavingDisabled()) {
++ try (Timing ignored = this.world.timings.chunkUnload.startTiming()) { // Paper
+ this.b(booleansupplier);
++ }// Paper
+ }
+
+ gameprofilerfiller.exit();
+@@ -405,6 +410,60 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+
+ }
+
++ // Paper start - async chunk save for unload
++ // Note: This is very unsafe to call if the chunk is still in use.
++ // This is also modeled after PlayerChunkMap#saveChunk(IChunkAccess, boolean), with the intentional difference
++ // that serializing the chunk is left to a worker thread.
++ private void asyncSave(IChunkAccess chunk) {
++ ChunkCoordIntPair chunkPos = chunk.getPos();
++ NBTTagCompound poiData;
++ try (Timing ignored = this.world.timings.chunkUnloadPOISerialization.startTiming()) {
++ poiData = this.getVillagePlace().getData(chunk.getPos());
++ }
++
++ com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.scheduleSave(this.world, chunkPos.x, chunkPos.z,
++ poiData, null, com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY);
++
++ if (!chunk.isNeedsSaving()) {
++ return;
++ }
++
++ ChunkStatus chunkstatus = chunk.getChunkStatus();
++
++ // Copied from PlayerChunkMap#saveChunk(IChunkAccess, boolean)
++ if (chunkstatus.getType() != ChunkStatus.Type.LEVELCHUNK) {
++ try (co.aikar.timings.Timing ignored1 = this.world.timings.chunkSaveOverwriteCheck.startTiming()) { // Paper
++ // Paper start - Optimize save by using status cache
++ try {
++ ChunkStatus statusOnDisk = this.getChunkStatusOnDisk(chunkPos);
++ if (statusOnDisk != null && statusOnDisk.getType() == ChunkStatus.Type.LEVELCHUNK) {
++ // Paper end
++ return;
++ }
++
++ if (chunkstatus == ChunkStatus.EMPTY && chunk.h().values().stream().noneMatch(StructureStart::e)) {
++ return;
++ }
++ } catch (IOException ex) {
++ ex.printStackTrace(); // TODO async this
++ return;
++ }
++ }
++ }
++
++ ChunkRegionLoader.AsyncSaveData asyncSaveData;
++ try (Timing ignored = this.world.timings.chunkUnloadPrepareSave.startTiming()) {
++ asyncSaveData = ChunkRegionLoader.getAsyncSaveData(this.world, chunk);
++ }
++
++ this.world.asyncChunkTaskManager.scheduleChunkSave(chunkPos.x, chunkPos.z, com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY,
++ asyncSaveData, chunk);
++
++ chunk.setLastSaved(this.world.getTime());
++ chunk.setNeedsSaving(false);
++ }
++ // Paper end
++
+ private void a(long i, PlayerChunk playerchunk) {
+ CompletableFuture completablefuture = playerchunk.getChunkSave();
+ Consumer consumer = (ichunkaccess) -> { // CraftBukkit - decompile error
+@@ -418,13 +477,20 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ ((Chunk) ichunkaccess).setLoaded(false);
+ }
+
+- this.saveChunk(ichunkaccess);
++ //this.saveChunk(ichunkaccess);// Paper - delay
+ if (this.loadedChunks.remove(i) && ichunkaccess instanceof Chunk) {
+ Chunk chunk = (Chunk) ichunkaccess;
+
+ this.world.unloadChunk(chunk);
+ }
+
++ try {
++ this.asyncSave(ichunkaccess); // Paper - async chunk saving
++ } catch (Throwable ex) {
++ LOGGER.fatal("Failed to prepare async save, attempting synchronous save", ex);
++ this.saveChunk(ichunkaccess, true);
++ }
++
+ this.lightEngine.a(ichunkaccess.getPos());
+ this.lightEngine.queueUpdate();
+ this.worldLoadListener.a(ichunkaccess.getPos(), (ChunkStatus) null);
+@@ -494,26 +560,30 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ }
+ }
+
++ // Paper start - Async chunk io
++ public NBTTagCompound completeChunkData(NBTTagCompound compound, ChunkCoordIntPair chunkcoordintpair) throws IOException {
++ return compound == null ? null : this.getChunkData(this.world.getWorldProvider().getDimensionManager(), this.getWorldPersistentDataSupplier(), compound, chunkcoordintpair, this.world);
++ }
++ // Paper end
++
+ private CompletableFuture<Either<IChunkAccess, PlayerChunk.Failure>> f(ChunkCoordIntPair chunkcoordintpair) {
+- return CompletableFuture.supplyAsync(() -> {
++ // Paper start - Async chunk io
++ final java.util.function.BiFunction<ChunkRegionLoader.InProgressChunkHolder, Throwable, Either<IChunkAccess, PlayerChunk.Failure>> syncLoadComplete = (chunkHolder, ioThrowable) -> {
+ try (Timing ignored = this.world.timings.syncChunkLoadTimer.startTimingIfSync()) { // Paper
+- NBTTagCompound nbttagcompound; // Paper
+- try (Timing ignored2 = this.world.timings.chunkIOStage1.startTimingIfSync()) { // Paper
+- nbttagcompound = this.readChunkData(chunkcoordintpair);
++ if (ioThrowable != null) {
++ com.destroystokyo.paper.io.IOUtil.rethrow(ioThrowable);
+ }
++ this.getVillagePlace().loadInData(chunkcoordintpair, chunkHolder.poiData);
++ chunkHolder.tasks.forEach(Runnable::run);
++ // Paper - async load completes this
++ // Paper end
+
+- if (nbttagcompound != null) {
+- boolean flag = nbttagcompound.hasKeyOfType("Level", 10) && nbttagcompound.getCompound("Level").hasKeyOfType("Status", 8);
+-
+- if (flag) {
+- ProtoChunk protochunk = ChunkRegionLoader.loadChunk(this.world, this.definedStructureManager, this.n, chunkcoordintpair, nbttagcompound);
+-
+- protochunk.setLastSaved(this.world.getTime());
+- return Either.left(protochunk);
+- }
+-
+- PlayerChunkMap.LOGGER.error("Chunk file at {} is missing level data, skipping", chunkcoordintpair);
++ // Paper start - This is done async
++ if (chunkHolder.protoChunk != null) {
++ chunkHolder.protoChunk.setLastSaved(this.world.getTime());
++ return Either.left(chunkHolder.protoChunk);
+ }
++ // Paper end
+ } catch (ReportedException reportedexception) {
+ Throwable throwable = reportedexception.getCause();
+
+@@ -527,7 +597,35 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ }
+
+ return Either.left(new ProtoChunk(chunkcoordintpair, ChunkConverter.a, this.world)); // Paper - Anti-Xray
+- }, this.executor);
++ // Paper start - Async chunk io
++ };
++ CompletableFuture<Either<IChunkAccess, PlayerChunk.Failure>> ret = new CompletableFuture<>();
++
++ Consumer<ChunkRegionLoader.InProgressChunkHolder> chunkHolderConsumer = (ChunkRegionLoader.InProgressChunkHolder holder) -> {
++ PlayerChunkMap.this.executor.addTask(() -> {
++ ret.complete(syncLoadComplete.apply(holder, null));
++ });
++ };
++
++ CompletableFuture<NBTTagCompound> chunkSaveFuture = this.world.asyncChunkTaskManager.getChunkSaveFuture(chunkcoordintpair.x, chunkcoordintpair.z);
++ if (chunkSaveFuture != null) {
++ this.world.asyncChunkTaskManager.raisePriority(chunkcoordintpair.x, chunkcoordintpair.z, com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGH_PRIORITY);
++ chunkSaveFuture.thenAccept((NBTTagCompound compound) -> {
++ if (compound == com.destroystokyo.paper.io.ConcreteFileIOThread.FAILURE_VALUE) {
++ // serialization failed, we have no choice but to load data from disk
++ this.world.asyncChunkTaskManager.scheduleChunkLoad(chunkcoordintpair.x, chunkcoordintpair.z,
++ com.destroystokyo.paper.io.PrioritizedTaskQueue.NORMAL_PRIORITY, chunkHolderConsumer, false);
++ } else {
++ this.world.asyncChunkTaskManager.scheduleChunkLoad(chunkcoordintpair.x, chunkcoordintpair.z,
++ com.destroystokyo.paper.io.PrioritizedTaskQueue.NORMAL_PRIORITY, chunkHolderConsumer, false, compound.clone()); // clone for safety
++ }
++ });
++ } else {
++ this.world.asyncChunkTaskManager.scheduleChunkLoad(chunkcoordintpair.x, chunkcoordintpair.z,
++ com.destroystokyo.paper.io.PrioritizedTaskQueue.NORMAL_PRIORITY, chunkHolderConsumer, false);
++ }
++ return ret;
++ // Paper end
+ }
+
+ private CompletableFuture<Either<IChunkAccess, PlayerChunk.Failure>> b(PlayerChunk playerchunk, ChunkStatus chunkstatus) {
+@@ -733,18 +831,43 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ return this.v.get();
+ }
+
++ // Paper start - async chunk io
++ private boolean writeDataAsync(ChunkCoordIntPair chunkPos, NBTTagCompound poiData, NBTTagCompound chunkData, boolean async) {
++ com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.scheduleSave(this.world, chunkPos.x, chunkPos.z,
++ poiData, chunkData, !async ? com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGHEST_PRIORITY : com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY);
++
++ if (async) {
++ return true;
++ }
++
++ try (co.aikar.timings.Timing ignored = this.world.timings.chunkSaveIOWait.startTiming()) { // Paper
++ Boolean successPoi = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.waitForIOToComplete(this.world, chunkPos.x, chunkPos.z, true, true);
++ Boolean successChunk = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.waitForIOToComplete(this.world, chunkPos.x, chunkPos.z, true, false);
++
++ if (successPoi == Boolean.FALSE || successChunk == Boolean.FALSE) {
++ return false;
++ }
++
++ // null indicates no task existed, which means our write completed before we waited on it
++
++ return true;
++ } // Paper
++ }
++ // Paper end
++
+ public boolean saveChunk(IChunkAccess ichunkaccess) {
+- this.n.a(ichunkaccess.getPos());
++ // Paper start - async param
++ return this.saveChunk(ichunkaccess, false);
++ }
++ public boolean saveChunk(IChunkAccess ichunkaccess, boolean async) {
++ try (co.aikar.timings.Timing ignored = this.world.timings.chunkSave.startTiming()) {
++ NBTTagCompound poiData = this.getVillagePlace().getData(ichunkaccess.getPos()); // Paper
++ //this.n.a(ichunkaccess.getPos()); // Delay
++ // Paper end
+ if (!ichunkaccess.isNeedsSaving()) {
+ return false;
+ } else {
+- try {
+- this.world.checkSession();
+- } catch (ExceptionWorldConflict exceptionworldconflict) {
+- PlayerChunkMap.LOGGER.error("Couldn't save chunk; already in use by another instance of Minecraft?", exceptionworldconflict);
+- com.destroystokyo.paper.exception.ServerInternalException.reportInternalException(exceptionworldconflict); // Paper
+- return false;
+- }
++ // Paper - The save session check is performed on the IO thread
+
+ ichunkaccess.setLastSaved(this.world.getTime());
+ ichunkaccess.setNeedsSaving(false);
+@@ -755,27 +878,33 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ NBTTagCompound nbttagcompound;
+
+ if (chunkstatus.getType() != ChunkStatus.Type.LEVELCHUNK) {
++ try (co.aikar.timings.Timing ignored1 = this.world.timings.chunkSaveOverwriteCheck.startTiming()) { // Paper
+ // Paper start - Optimize save by using status cache
+ ChunkStatus statusOnDisk = this.getChunkStatusOnDisk(chunkcoordintpair);
+ if (statusOnDisk != null && statusOnDisk.getType() == ChunkStatus.Type.LEVELCHUNK) {
+ // Paper end
++ this.writeDataAsync(ichunkaccess.getPos(), poiData, null, async); // Paper - Async chunk io
+ return false;
+ }
+
+ if (chunkstatus == ChunkStatus.EMPTY && ichunkaccess.h().values().stream().noneMatch(StructureStart::e)) {
++ this.writeDataAsync(ichunkaccess.getPos(), poiData, null, async); // Paper - Async chunk io
+ return false;
+ }
+ }
+-
++ } // Paper
++ try (co.aikar.timings.Timing ignored1 = this.world.timings.chunkSaveDataSerialization.startTiming()) { // Paper
+ nbttagcompound = ChunkRegionLoader.saveChunk(this.world, ichunkaccess);
+- this.write(chunkcoordintpair, nbttagcompound);
+- return true;
++ } // Paper
++ return this.writeDataAsync(ichunkaccess.getPos(), poiData, nbttagcompound, async); // Paper - Async chunk io
++ //return true; // Paper
+ } catch (Exception exception) {
+ PlayerChunkMap.LOGGER.error("Failed to save chunk {},{}", chunkcoordintpair.x, chunkcoordintpair.z, exception);
+ com.destroystokyo.paper.exception.ServerInternalException.reportInternalException(exception); // Paper
+ return false;
+ }
+ }
++ } // Paper
+ }
+
+ protected void setViewDistance(int i) {
+@@ -879,6 +1008,42 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ }
+ }
+
++ // Paper start - Asynchronous chunk io
++ @Nullable
++ @Override
++ public NBTTagCompound read(ChunkCoordIntPair chunkcoordintpair) throws IOException {
++ if (Thread.currentThread() != com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE) {
++ NBTTagCompound ret = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE
++ .loadChunkDataAsyncFuture(this.world, chunkcoordintpair.x, chunkcoordintpair.z, com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread(),
++ false, true, true).join().chunkData;
++
++ if (ret == com.destroystokyo.paper.io.ConcreteFileIOThread.FAILURE_VALUE) {
++ throw new IOException("See logs for further detail");
++ }
++ return ret;
++ }
++ return super.read(chunkcoordintpair);
++ }
++
++ @Override
++ public void write(ChunkCoordIntPair chunkcoordintpair, NBTTagCompound nbttagcompound) throws IOException {
++ if (Thread.currentThread() != com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE) {
++ com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.scheduleSave(
++ this.world, chunkcoordintpair.x, chunkcoordintpair.z, null, nbttagcompound,
++ com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread());
++
++ Boolean ret = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.waitForIOToComplete(this.world,
++ chunkcoordintpair.x, chunkcoordintpair.z, true, false);
++
++ if (ret == Boolean.FALSE) {
++ throw new IOException("See logs for further detail");
++ }
++ return;
++ }
++ super.write(chunkcoordintpair, nbttagcompound);
++ }
++ // Paper end
++
+ @Nullable
+ public NBTTagCompound readChunkData(ChunkCoordIntPair chunkcoordintpair) throws IOException { // Paper - private -> public
+ NBTTagCompound nbttagcompound = this.read(chunkcoordintpair);
+@@ -901,12 +1066,42 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+
+ // Paper start - chunk status cache "api"
+ public ChunkStatus getChunkStatusOnDiskIfCached(ChunkCoordIntPair chunkPos) {
++ // Paper start - async chunk save for unload
++ IChunkAccess unloadingChunk = this.world.asyncChunkTaskManager.getChunkInSaveProgress(chunkPos.x, chunkPos.z);
++ if (unloadingChunk != null) {
++ return unloadingChunk.getChunkStatus();
++ }
++ // Paper end
++ // Paper start - async io
++ NBTTagCompound inProgressWrite = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE
++ .getPendingWrite(this.world, chunkPos.x, chunkPos.z, false);
++
++ if (inProgressWrite != null) {
++ return ChunkRegionLoader.getStatus(inProgressWrite);
++ }
++ // Paper end
++
+ RegionFile regionFile = this.getRegionFileIfLoaded(chunkPos);
+
+ return regionFile == null ? null : regionFile.getStatusIfCached(chunkPos.x, chunkPos.z);
+ }
+
+ public ChunkStatus getChunkStatusOnDisk(ChunkCoordIntPair chunkPos) throws IOException {
++ // Paper start - async chunk save for unload
++ IChunkAccess unloadingChunk = this.world.asyncChunkTaskManager.getChunkInSaveProgress(chunkPos.x, chunkPos.z);
++ if (unloadingChunk != null) {
++ return unloadingChunk.getChunkStatus();
++ }
++ // Paper end
++ // Paper start - async io
++ NBTTagCompound inProgressWrite = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE
++ .getPendingWrite(this.world, chunkPos.x, chunkPos.z, false);
++
++ if (inProgressWrite != null) {
++ return ChunkRegionLoader.getStatus(inProgressWrite);
++ }
++ // Paper end
++ synchronized (this) { // Paper - async io
+ RegionFile regionFile = this.getRegionFile(chunkPos, false);
+
+ if (!regionFile.chunkExists(chunkPos)) {
+@@ -918,17 +1113,55 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+ if (status != null) {
+ return status;
+ }
++ // Paper start - async io
++ }
+
+- this.readChunkData(chunkPos);
++ NBTTagCompound compound = this.readChunkData(chunkPos);
+
+- return regionFile.getStatusIfCached(chunkPos.x, chunkPos.z);
++ return ChunkRegionLoader.getStatus(compound);
++ // Paper end
+ }
+
+ public void updateChunkStatusOnDisk(ChunkCoordIntPair chunkPos, @Nullable NBTTagCompound compound) throws IOException {
++ synchronized (this) { // Paper - async io
+ RegionFile regionFile = this.getRegionFile(chunkPos, false);
+
+ regionFile.setStatus(chunkPos.x, chunkPos.z, ChunkRegionLoader.getStatus(compound));
++ } // Paper - async io
+ }
++
++ // Paper start - async io
++ // This function will not load chunk data off disk to check the status.
++ // Returns null for unknown, or EMPTY when the chunk is absent from disk or has an empty status on disk.
++ public ChunkStatus getStatusOnDiskNoLoad(int x, int z) {
++ // Paper start - async chunk save for unload
++ IChunkAccess unloadingChunk = this.world.asyncChunkTaskManager.getChunkInSaveProgress(x, z);
++ if (unloadingChunk != null) {
++ return unloadingChunk.getChunkStatus();
++ }
++ // Paper end
++ // Paper start - async io
++ net.minecraft.server.NBTTagCompound inProgressWrite = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE
++ .getPendingWrite(this.world, x, z, false);
++
++ if (inProgressWrite != null) {
++ return net.minecraft.server.ChunkRegionLoader.getStatus(inProgressWrite);
++ }
++ // Paper end
++ // variant of PlayerChunkMap#getChunkStatusOnDisk that does not load data off disk, but loads the region file
++ ChunkCoordIntPair chunkPos = new ChunkCoordIntPair(x, z);
++ synchronized (world.getChunkProvider().playerChunkMap) {
++ net.minecraft.server.RegionFile file;
++ try {
++ file = world.getChunkProvider().playerChunkMap.getRegionFile(chunkPos, false);
++ } catch (IOException ex) {
++ throw new RuntimeException(ex);
++ }
++
++ return !file.chunkExists(chunkPos) ? ChunkStatus.EMPTY : file.getStatusIfCached(x, z);
++ }
++ }
++ // Paper end
+ // Paper end
+
+ boolean isOutsideOfRange(ChunkCoordIntPair chunkcoordintpair) {
+@@ -1272,6 +1505,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
+
+ }
+
++ public VillagePlace getVillagePlace() { return this.h(); } // Paper - OBFHELPER
+ protected VillagePlace h() {
+ return this.n;
+ }
+diff --git a/src/main/java/net/minecraft/server/RegionFile.java b/src/main/java/net/minecraft/server/RegionFile.java
+index 66c8b0307f..2ee4b88f09 100644
+--- a/src/main/java/net/minecraft/server/RegionFile.java
++++ b/src/main/java/net/minecraft/server/RegionFile.java
+@@ -337,7 +337,7 @@ public class RegionFile implements AutoCloseable {
+ this.writeInt(i); // Paper - Avoid 3 io write calls
+ }
+
+- public void close() throws IOException {
++ public synchronized void close() throws IOException { // Paper - synchronize
+ this.closed = true; // Paper
+ this.b.close();
+ }
+diff --git a/src/main/java/net/minecraft/server/RegionFileCache.java b/src/main/java/net/minecraft/server/RegionFileCache.java
+index e0fdf5f90f..d9283b36b6 100644
+--- a/src/main/java/net/minecraft/server/RegionFileCache.java
++++ b/src/main/java/net/minecraft/server/RegionFileCache.java
+@@ -51,13 +51,13 @@ public abstract class RegionFileCache implements AutoCloseable {
+ }
+
+ // Paper start
+- public RegionFile getRegionFileIfLoaded(ChunkCoordIntPair chunkcoordintpair) {
++ public synchronized RegionFile getRegionFileIfLoaded(ChunkCoordIntPair chunkcoordintpair) { // Paper - synchronize for async io
+ return this.cache.getAndMoveToFirst(ChunkCoordIntPair.pair(chunkcoordintpair.getRegionX(), chunkcoordintpair.getRegionZ()));
+ }
+ // Paper end
+
+ public RegionFile getRegionFile(ChunkCoordIntPair chunkcoordintpair, boolean existingOnly) throws IOException { return this.a(chunkcoordintpair, existingOnly); } // Paper - OBFHELPER
+- private RegionFile a(ChunkCoordIntPair chunkcoordintpair, boolean existingOnly) throws IOException { // CraftBukkit
++ private synchronized RegionFile a(ChunkCoordIntPair chunkcoordintpair, boolean existingOnly) throws IOException { // CraftBukkit // Paper - synchronize for async io
+ long i = ChunkCoordIntPair.pair(chunkcoordintpair.getRegionX(), chunkcoordintpair.getRegionZ());
+ RegionFile regionfile = (RegionFile) this.cache.getAndMoveToFirst(i);
+
+@@ -344,7 +344,7 @@ public abstract class RegionFileCache implements AutoCloseable {
+ }
+
+ // CraftBukkit start
+- public boolean chunkExists(ChunkCoordIntPair pos) throws IOException {
++ public synchronized boolean chunkExists(ChunkCoordIntPair pos) throws IOException { // Paper - synchronize
+ copyIfNeeded(pos.x, pos.z); // Paper
+ RegionFile regionfile = a(pos, true);
+
+diff --git a/src/main/java/net/minecraft/server/RegionFileSection.java b/src/main/java/net/minecraft/server/RegionFileSection.java
+index a343a7b31d..7584174eb7 100644
+--- a/src/main/java/net/minecraft/server/RegionFileSection.java
++++ b/src/main/java/net/minecraft/server/RegionFileSection.java
+@@ -24,7 +24,7 @@ public class RegionFileSection extends RegionFi
+
+ private static final Logger LOGGER = LogManager.getLogger();
+ private final Long2ObjectMap> b = new Long2ObjectOpenHashMap();
+- private final LongLinkedOpenHashSet d = new LongLinkedOpenHashSet();
++ protected final LongLinkedOpenHashSet d = new LongLinkedOpenHashSet(); // Paper
+ private final BiFunction, R> e;
+ private final Function f;
+ private final DataFixer g;
+@@ -39,8 +39,8 @@ public class RegionFileSection extends RegionFi
+ }
+
+ protected void a(BooleanSupplier booleansupplier) {
+- while (!this.d.isEmpty() && booleansupplier.getAsBoolean()) {
+- ChunkCoordIntPair chunkcoordintpair = SectionPosition.a(this.d.firstLong()).u();
++ while (!this.d.isEmpty() && booleansupplier.getAsBoolean()) { // Paper - conflict here to avoid obfhelpers
++ ChunkCoordIntPair chunkcoordintpair = SectionPosition.a(this.d.firstLong()).u(); // Paper - conflict here to avoid obfhelpers
+
+ this.d(chunkcoordintpair);
+ }
+@@ -82,9 +82,9 @@ public class RegionFileSection extends RegionFi
+ Optional optional = this.d(i);
+
+ if (optional.isPresent()) {
+- return (MinecraftSerializable) optional.get();
++ return optional.get(); // Paper - decompile fix
+ } else {
+- R r0 = (MinecraftSerializable) this.f.apply(() -> {
++ R r0 = this.f.apply(() -> { // Paper - decompile fix
+ this.a(i);
+ });
+
+@@ -94,7 +94,12 @@ public class RegionFileSection extends RegionFi
+ }
+
+ private void b(ChunkCoordIntPair chunkcoordintpair) {
+- this.a(chunkcoordintpair, DynamicOpsNBT.a, this.c(chunkcoordintpair));
++ // Paper start - load data in function
++ this.loadInData(chunkcoordintpair, this.c(chunkcoordintpair));
++ }
++ public void loadInData(ChunkCoordIntPair chunkPos, NBTTagCompound compound) {
++ this.a(chunkPos, DynamicOpsNBT.a, compound);
++ // Paper end
+ }
+
+ @Nullable
+@@ -123,7 +128,7 @@ public class RegionFileSection extends RegionFi
+ for (int l = 0; l < 16; ++l) {
+ long i1 = SectionPosition.a(chunkcoordintpair, l).v();
+ Optional optional = optionaldynamic.get(Integer.toString(l)).get().map((dynamic2) -> {
+- return (MinecraftSerializable) this.e.apply(() -> {
++ return this.e.apply(() -> { // Paper - decompile fix
+ this.a(i1);
+ }, dynamic2);
+ });
+@@ -142,7 +147,7 @@ public class RegionFileSection extends RegionFi
+ }
+
+ private void d(ChunkCoordIntPair chunkcoordintpair) {
+- Dynamic dynamic = this.a(chunkcoordintpair, DynamicOpsNBT.a);
++ Dynamic dynamic = this.a(chunkcoordintpair, DynamicOpsNBT.a); // Paper - conflict here to avoid adding obfhelpers :)
+ NBTBase nbtbase = (NBTBase) dynamic.getValue();
+
+ if (nbtbase instanceof NBTTagCompound) {
+@@ -157,6 +162,20 @@ public class RegionFileSection extends RegionFi
+
+ }
+
++ // Paper start - internal get data function, copied from above
++ private NBTTagCompound getDataInternal(ChunkCoordIntPair chunkcoordintpair) {
++ Dynamic dynamic = this.a(chunkcoordintpair, DynamicOpsNBT.a);
++ NBTBase nbtbase = (NBTBase) dynamic.getValue();
++
++ if (nbtbase instanceof NBTTagCompound) {
++ return (NBTTagCompound)nbtbase;
++ } else {
++ RegionFileSection.LOGGER.error("Expected compound tag, got {}", nbtbase);
++ }
++ return null;
++ }
++ // Paper end
++
+ private Dynamic a(ChunkCoordIntPair chunkcoordintpair, DynamicOps dynamicops) {
+ Map map = Maps.newHashMap();
+
+@@ -193,9 +212,9 @@ public class RegionFileSection extends RegionFi
+ public void a(ChunkCoordIntPair chunkcoordintpair) {
+ if (!this.d.isEmpty()) {
+ for (int i = 0; i < 16; ++i) {
+- long j = SectionPosition.a(chunkcoordintpair, i).v();
++ long j = SectionPosition.a(chunkcoordintpair, i).v(); // Paper - conflict here to avoid obfhelpers
+
+- if (this.d.contains(j)) {
++ if (this.d.contains(j)) { // Paper - conflict here to avoid obfhelpers
+ this.d(chunkcoordintpair);
+ return;
+ }
+@@ -203,4 +222,21 @@ public class RegionFileSection extends RegionFi
+ }
+
+ }
++
++ // Paper start - get data function
++ public NBTTagCompound getData(ChunkCoordIntPair chunkcoordintpair) {
++ // Note: Copied from above
++ // This checks whether data exists for the chunk; if it does, the data is built by getDataInternal(ChunkCoordIntPair)
++ if (!this.d.isEmpty()) {
++ for (int i = 0; i < 16; ++i) {
++ long j = SectionPosition.a(chunkcoordintpair, i).v();
++
++ if (this.d.contains(j)) {
++ return this.getDataInternal(chunkcoordintpair);
++ }
++ }
++ }
++ return null;
++ }
++ // Paper end
+ }
+diff --git a/src/main/java/net/minecraft/server/TicketType.java b/src/main/java/net/minecraft/server/TicketType.java
+index 9c114d2d37..e3150f85a5 100644
+--- a/src/main/java/net/minecraft/server/TicketType.java
++++ b/src/main/java/net/minecraft/server/TicketType.java
+@@ -22,6 +22,7 @@ public class TicketType {
+ public static final TicketType PLUGIN = a("plugin", (a, b) -> 0); // CraftBukkit
+ public static final TicketType PLUGIN_TICKET = a("plugin_ticket", (plugin1, plugin2) -> plugin1.getClass().getName().compareTo(plugin2.getClass().getName())); // Craftbukkit
+ public static final TicketType ANTIXRAY = a("antixray", Integer::compareTo); // Paper - Anti-Xray
++ public static final TicketType<Long> ASYNC_LOAD = a("async_load", Long::compareTo); // Paper
+
+ public static TicketType a(String s, Comparator comparator) {
+ return new TicketType<>(s, comparator, 0L);
+diff --git a/src/main/java/net/minecraft/server/VillagePlace.java b/src/main/java/net/minecraft/server/VillagePlace.java
+index b0e6ad773e..f6c95ae8c1 100644
+--- a/src/main/java/net/minecraft/server/VillagePlace.java
++++ b/src/main/java/net/minecraft/server/VillagePlace.java
+@@ -20,8 +20,16 @@ public class VillagePlace extends RegionFileSection {
+
+ private final VillagePlace.a a = new VillagePlace.a();
+
++ private final WorldServer world; // Paper
++
+ public VillagePlace(File file, DataFixer datafixer) {
++ // Paper start
++ this(file, datafixer, null);
++ }
++ public VillagePlace(File file, DataFixer datafixer, WorldServer world) {
++ // Paper end
+ super(file, VillagePlaceSection::new, VillagePlaceSection::new, datafixer, DataFixTypes.POI_CHUNK);
++ this.world = world; // Paper
+ }
+
+ public void a(BlockPosition blockposition, VillagePlaceType villageplacetype) {
+@@ -121,7 +129,23 @@ public class VillagePlace extends RegionFileSection {
+
+ @Override
+ public void a(BooleanSupplier booleansupplier) {
+- super.a(booleansupplier);
++ // Paper start - async chunk io
++ if (this.world == null) {
++ super.a(booleansupplier);
++ } else {
++ //super.a(booleansupplier); // re-implement below
++ while (!((RegionFileSection)this).d.isEmpty() && booleansupplier.getAsBoolean()) {
++ ChunkCoordIntPair chunkcoordintpair = SectionPosition.a(((RegionFileSection)this).d.firstLong()).u();
++
++ NBTTagCompound data;
++ try (co.aikar.timings.Timing ignored1 = this.world.timings.poiSaveDataSerialization.startTiming()) {
++ data = this.getData(chunkcoordintpair);
++ }
++ com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.scheduleSave(this.world,
++ chunkcoordintpair.x, chunkcoordintpair.z, data, null, com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY);
++ }
++ }
++ // Paper end
+ this.a.a();
+ }
+
+@@ -157,7 +181,7 @@ public class VillagePlace extends RegionFileSection {
+ }
+
+ private static boolean a(ChunkSection chunksection) {
+- Stream stream = VillagePlaceType.f();
++ Stream<VillagePlaceType> stream = VillagePlaceType.f(); // Paper - decompile fix
+
+ chunksection.getClass();
+ return stream.anyMatch(chunksection::a);
+@@ -207,6 +231,42 @@ public class VillagePlace extends RegionFileSection {
+ }
+ }
+
++ // Paper start - Asynchronous chunk io
++ @javax.annotation.Nullable
++ @Override
++ public NBTTagCompound read(ChunkCoordIntPair chunkcoordintpair) throws java.io.IOException {
++ if (this.world != null && Thread.currentThread() != com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE) {
++ NBTTagCompound ret = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE
++ .loadChunkDataAsyncFuture(this.world, chunkcoordintpair.x, chunkcoordintpair.z, com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread(),
++ true, false, true).join().poiData;
++
++ if (ret == com.destroystokyo.paper.io.ConcreteFileIOThread.FAILURE_VALUE) {
++ throw new java.io.IOException("See logs for further detail");
++ }
++ return ret;
++ }
++ return super.read(chunkcoordintpair);
++ }
++
++ @Override
++ public void write(ChunkCoordIntPair chunkcoordintpair, NBTTagCompound nbttagcompound) throws java.io.IOException {
++ if (this.world != null && Thread.currentThread() != com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE) {
++ com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.scheduleSave(
++ this.world, chunkcoordintpair.x, chunkcoordintpair.z, nbttagcompound, null,
++ com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread());
++
++ Boolean ret = com.destroystokyo.paper.io.ConcreteFileIOThread.Holder.INSTANCE.waitForIOToComplete(this.world,
++ chunkcoordintpair.x, chunkcoordintpair.z, true, true);
++
++ if (ret == Boolean.FALSE) {
++ throw new java.io.IOException("See logs for further detail");
++ }
++ return;
++ }
++ super.write(chunkcoordintpair, nbttagcompound);
++ }
++ // Paper end
++
+ public static enum Occupancy {
+
+ HAS_SPACE(VillagePlaceRecord::d), IS_OCCUPIED(VillagePlaceRecord::e), ANY((villageplacerecord) -> {
+@@ -215,7 +275,7 @@ public class VillagePlace extends RegionFileSection {
+
+ private final Predicate<? super VillagePlaceRecord> d;
+
+- private Occupancy(Predicate predicate) {
++ private Occupancy(Predicate<? super VillagePlaceRecord> predicate) { // Paper - decompile fix
+ this.d = predicate;
+ }
+
+diff --git a/src/main/java/net/minecraft/server/WorldServer.java b/src/main/java/net/minecraft/server/WorldServer.java
+index 42cee6ab9e..83e47b6ad2 100644
+--- a/src/main/java/net/minecraft/server/WorldServer.java
++++ b/src/main/java/net/minecraft/server/WorldServer.java
+@@ -1,9 +1,9 @@
+ package net.minecraft.server;
+
+ import co.aikar.timings.TimingHistory;
+-import co.aikar.timings.Timings;
+
+ import com.destroystokyo.paper.PaperWorldConfig;
++import com.destroystokyo.paper.io.chunk.ChunkTaskManager;
+ import com.google.common.collect.Lists;
+ import com.google.common.collect.Maps;
+ import com.google.common.collect.Queues;
+@@ -78,6 +78,79 @@ public class WorldServer extends World {
+ return new Throwable(entity + " Added to world at " + new java.util.Date());
+ }
+
++ // Paper start - Asynchronous IO
++ public final com.destroystokyo.paper.io.ConcreteFileIOThread.ChunkDataController poiDataController = new com.destroystokyo.paper.io.ConcreteFileIOThread.ChunkDataController() {
++ @Override
++ public void writeData(int x, int z, NBTTagCompound compound) throws java.io.IOException {
++ WorldServer.this.getChunkProvider().playerChunkMap.getVillagePlace().write(new ChunkCoordIntPair(x, z), compound);
++ }
++
++ @Override
++ public NBTTagCompound readData(int x, int z) throws java.io.IOException {
++ return WorldServer.this.getChunkProvider().playerChunkMap.getVillagePlace().read(new ChunkCoordIntPair(x, z));
++ }
++
++ @Override
++ public <T> T computeForRegionFile(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
++ synchronized (WorldServer.this.getChunkProvider().playerChunkMap.getVillagePlace()) {
++ RegionFile file;
++
++ try {
++ file = WorldServer.this.getChunkProvider().playerChunkMap.getVillagePlace().getRegionFile(new ChunkCoordIntPair(chunkX, chunkZ), false);
++ } catch (java.io.IOException ex) {
++ throw new RuntimeException(ex);
++ }
++
++ return function.apply(file);
++ }
++ }
++
++ @Override
++ public <T> T computeForRegionFileIfLoaded(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
++ synchronized (WorldServer.this.getChunkProvider().playerChunkMap.getVillagePlace()) {
++ RegionFile file = WorldServer.this.getChunkProvider().playerChunkMap.getVillagePlace().getRegionFileIfLoaded(new ChunkCoordIntPair(chunkX, chunkZ));
++ return function.apply(file);
++ }
++ }
++ };
++
++ public final com.destroystokyo.paper.io.ConcreteFileIOThread.ChunkDataController chunkDataController = new com.destroystokyo.paper.io.ConcreteFileIOThread.ChunkDataController() {
++ @Override
++ public void writeData(int x, int z, NBTTagCompound compound) throws java.io.IOException {
++ WorldServer.this.getChunkProvider().playerChunkMap.write(new ChunkCoordIntPair(x, z), compound);
++ }
++
++ @Override
++ public NBTTagCompound readData(int x, int z) throws java.io.IOException {
++ return WorldServer.this.getChunkProvider().playerChunkMap.read(new ChunkCoordIntPair(x, z));
++ }
++
++ @Override
++ public <T> T computeForRegionFile(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
++ synchronized (WorldServer.this.getChunkProvider().playerChunkMap) {
++ RegionFile file;
++
++ try {
++ file = WorldServer.this.getChunkProvider().playerChunkMap.getRegionFile(new ChunkCoordIntPair(chunkX, chunkZ), false);
++ } catch (java.io.IOException ex) {
++ throw new RuntimeException(ex);
++ }
++
++ return function.apply(file);
++ }
++ }
++
++ @Override
++ public <T> T computeForRegionFileIfLoaded(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
++ synchronized (WorldServer.this.getChunkProvider().playerChunkMap) {
++ RegionFile file = WorldServer.this.getChunkProvider().playerChunkMap.getRegionFileIfLoaded(new ChunkCoordIntPair(chunkX, chunkZ));
++ return function.apply(file);
++ }
++ }
++ };
++ public final ChunkTaskManager asyncChunkTaskManager;
++ // Paper end
++
+ // Add env and gen to constructor
+ public WorldServer(MinecraftServer minecraftserver, Executor executor, WorldNBTStorage worldnbtstorage, WorldData worlddata, DimensionManager dimensionmanager, GameProfilerFiller gameprofilerfiller, WorldLoadListener worldloadlistener, org.bukkit.World.Environment env, org.bukkit.generator.ChunkGenerator gen) {
+ super(worlddata, dimensionmanager, (world, worldprovider) -> {
+@@ -121,6 +194,8 @@ public class WorldServer extends World {
+
+ this.mobSpawnerTrader = this.worldProvider.getDimensionManager().getType() == DimensionManager.OVERWORLD ? new MobSpawnerTrader(this) : null; // CraftBukkit - getType()
+ this.getServer().addWorld(this.getWorld()); // CraftBukkit
++
++ this.asyncChunkTaskManager = new ChunkTaskManager(this); // Paper
+ }
+
+ public void doTick(BooleanSupplier booleansupplier) {
+diff --git a/src/main/java/org/bukkit/craftbukkit/CraftWorld.java b/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
+index 48f3a784a8..aebd72b860 100644
+--- a/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
++++ b/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
+@@ -531,22 +531,23 @@ public class CraftWorld implements World {
+ }
+
+ if (!generate) {
+- net.minecraft.server.RegionFile file;
+- try {
+- file = world.getChunkProvider().playerChunkMap.getRegionFile(chunkPos, false);
+- } catch (IOException ex) {
+- throw new RuntimeException(ex);
+- }
++ ChunkStatus status = world.getChunkProvider().playerChunkMap.getStatusOnDiskNoLoad(x, z); // Paper - async io - move to own method
+
+- ChunkStatus status = file.getStatusIfCached(x, z);
+- if (!file.chunkExists(chunkPos) || (status != null && status != ChunkStatus.FULL)) {
++ // Paper start - async io
++ if (status == ChunkStatus.EMPTY) {
++ // does not exist on disk
+ return false;
+ }
+
++ if (status == null) { // at this stage we don't know what it is on disk
+ IChunkAccess chunk = world.getChunkProvider().getChunkAt(x, z, ChunkStatus.EMPTY, true);
+ if (!(chunk instanceof ProtoChunkExtension) && !(chunk instanceof net.minecraft.server.Chunk)) {
+ return false;
+ }
++ } else if (status != ChunkStatus.FULL) {
++ return false; // not full status on disk
++ }
++ // Paper end
+
+ // fall through to load
+ // we do this so we do not re-read the chunk data on disk
+@@ -2323,16 +2324,17 @@ public class CraftWorld implements World {
+
+ @Override
+ public CompletableFuture<Chunk> getChunkAtAsync(int x, int z, boolean gen) {
+- // TODO placeholder
+- if (Bukkit.isPrimaryThread()) {
+- return CompletableFuture.completedFuture(getChunkAtGen(x, z, gen));
+- } else {
+- CompletableFuture<Chunk> ret = new CompletableFuture<>();
+- net.minecraft.server.MinecraftServer.getServer().scheduleOnMain(() -> {
+- ret.complete(getChunkAtGen(x, z, gen));
+- });
+- return ret;
++ net.minecraft.server.Chunk immediate = this.world.getChunkProvider().getChunkAtIfLoadedImmediately(x, z);
++ if (immediate != null) {
++ return CompletableFuture.completedFuture(immediate.bukkitChunk);
+ }
++
++ CompletableFuture<Chunk> ret = new CompletableFuture<>();
++ this.world.getChunkProvider().getChunkAtAsynchronously(x, z, gen, (net.minecraft.server.Chunk chunk) -> {
++ ret.complete(chunk == null ? null : chunk.bukkitChunk);
++ });
++
++ return ret;
+ }
+ // Paper end
+
+--
+2.20.1
+
diff --git a/patches/server/0054-Reduce-sync-loads.patch b/patches/server/0054-Reduce-sync-loads.patch
new file mode 100644
index 000000000..f52ce9c6f
--- /dev/null
+++ b/patches/server/0054-Reduce-sync-loads.patch
@@ -0,0 +1,291 @@
+From d02b684c04499ecbdb0baade56dffc3922913f18 Mon Sep 17 00:00:00 2001
+From: Spottedleaf
+Date: Fri, 19 Jul 2019 03:29:14 -0700
+Subject: [PATCH] Reduce sync loads
+
+This reduces calls to getChunkAt that would synchronously load chunks.
+
+This patch also adds a tool to find the call sites that trigger such loads;
+it must be enabled by setting the startup flag -Dpaper.debug-sync-loads=true
+
+To dump a debug log of the recorded sync loads, run the command /paper syncloadinfo
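+
+For example (assuming a typical startup command; adjust the jar name to your own setup):
+
+    java -Dpaper.debug-sync-loads=true -jar paper.jar
+
+then, once the server has been running for a while, run /paper syncloadinfo.
+As implemented below, the report is written as JSON to a timestamped
+sync-load-info*.txt file under the server's debug directory, listing each
+offending stack trace per world along with how many times it was hit.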
+---
+ .../com/destroystokyo/paper/PaperCommand.java | 43 ++++++
+ .../paper/io/SyncLoadFinder.java | 142 ++++++++++++++++++
+ .../minecraft/server/ChunkProviderServer.java | 1 +
+ src/main/java/net/minecraft/server/World.java | 6 +-
+ 4 files changed, 189 insertions(+), 3 deletions(-)
+ create mode 100644 src/main/java/com/destroystokyo/paper/io/SyncLoadFinder.java
+
+diff --git a/src/main/java/com/destroystokyo/paper/PaperCommand.java b/src/main/java/com/destroystokyo/paper/PaperCommand.java
+index 8db92edc36..c7ff5132e2 100644
+--- a/src/main/java/com/destroystokyo/paper/PaperCommand.java
++++ b/src/main/java/com/destroystokyo/paper/PaperCommand.java
+@@ -1,9 +1,13 @@
+ package com.destroystokyo.paper;
+
++import com.destroystokyo.paper.io.SyncLoadFinder;
+ import com.google.common.base.Functions;
+ import com.google.common.collect.Iterables;
+ import com.google.common.collect.Lists;
+ import com.google.common.collect.Maps;
++import com.google.gson.JsonObject;
++import com.google.gson.internal.Streams;
++import com.google.gson.stream.JsonWriter;
+ import net.minecraft.server.*;
+ import org.apache.commons.lang3.tuple.MutablePair;
+ import org.apache.commons.lang3.tuple.Pair;
+@@ -18,6 +22,9 @@ import org.bukkit.craftbukkit.CraftWorld;
+ import org.bukkit.entity.Player;
+
+ import java.io.File;
++import java.io.FileOutputStream;
++import java.io.PrintStream;
++import java.io.StringWriter;
+ import java.time.LocalDateTime;
+ import java.time.format.DateTimeFormatter;
+ import java.util.*;
+@@ -130,6 +137,9 @@ public class PaperCommand extends Command {
+ case "chunkinfo":
+ doChunkInfo(sender, args);
+ break;
++ case "syncloadinfo":
++ this.doSyncLoadInfo(sender, args);
++ break;
+ case "ver":
+ case "version":
+ Command ver = org.bukkit.Bukkit.getServer().getCommandMap().getCommand("version");
+@@ -146,6 +156,39 @@ public class PaperCommand extends Command {
+ return true;
+ }
+
++ private void doSyncLoadInfo(CommandSender sender, String[] args) {
++ if (!SyncLoadFinder.ENABLED) {
++ sender.sendMessage(ChatColor.RED + "This command requires the server startup flag '-Dpaper.debug-sync-loads=true' to be set.");
++ return;
++ }
++ File file = new File(new File(new File("."), "debug"),
++ "sync-load-info" + DateTimeFormatter.ofPattern("yyyy-MM-dd_HH.mm.ss").format(LocalDateTime.now()) + ".txt");
++ sender.sendMessage(ChatColor.GREEN + "Writing sync load info to " + file.toString());
++
++
++ try {
++ final JsonObject data = SyncLoadFinder.serialize();
++
++ StringWriter stringWriter = new StringWriter();
++ JsonWriter jsonWriter = new JsonWriter(stringWriter);
++ jsonWriter.setIndent(" ");
++ jsonWriter.setLenient(false);
++ Streams.write(data, jsonWriter);
++
++ String fileData = stringWriter.toString();
++
++ try (
++ PrintStream out = new PrintStream(new FileOutputStream(file), false, "UTF-8")
++ ) {
++ out.print(fileData);
++ }
++ sender.sendMessage(ChatColor.GREEN + "Successfully written sync load information!");
++ } catch (Throwable thr) {
++ sender.sendMessage(ChatColor.RED + "Failed to write sync load information");
++ thr.printStackTrace();
++ }
++ }
++
+ private void doChunkInfo(CommandSender sender, String[] args) {
+ List worlds;
+ if (args.length < 2 || args[1].equals("*")) {
+diff --git a/src/main/java/com/destroystokyo/paper/io/SyncLoadFinder.java b/src/main/java/com/destroystokyo/paper/io/SyncLoadFinder.java
+new file mode 100644
+index 0000000000..ad6c5ff0d5
+--- /dev/null
++++ b/src/main/java/com/destroystokyo/paper/io/SyncLoadFinder.java
+@@ -0,0 +1,142 @@
++package com.destroystokyo.paper.io;
++
++import com.google.gson.JsonArray;
++import com.google.gson.JsonObject;
++import com.mojang.datafixers.util.Pair;
++import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;
++import net.minecraft.server.World;
++
++import java.util.ArrayList;
++import java.util.List;
++import java.util.Map;
++import java.util.WeakHashMap;
++
++public class SyncLoadFinder {
++
++ public static final boolean ENABLED = Boolean.getBoolean("paper.debug-sync-loads");
++
++ private static final WeakHashMap<World, Object2IntOpenHashMap<ThrowableWithEquals>> SYNC_LOADS = new WeakHashMap<>();
++
++ public static void logSyncLoad(final World world, final int chunkX, final int chunkZ) {
++ if (!ENABLED) {
++ return;
++ }
++
++ final ThrowableWithEquals stacktrace = new ThrowableWithEquals(Thread.currentThread().getStackTrace());
++
++ SYNC_LOADS.compute(world, (final World keyInMap, Object2IntOpenHashMap<ThrowableWithEquals> map) -> {
++ if (map == null) {
++ map = new Object2IntOpenHashMap<>();
++ }
++
++ map.computeInt(stacktrace, (ThrowableWithEquals keyInMap0, Integer valueInMap) -> {
++ return valueInMap == null ? Integer.valueOf(1) : Integer.valueOf(valueInMap.intValue() + 1);
++ });
++
++ return map;
++ });
++ }
++
++ public static JsonObject serialize() {
++ final JsonObject ret = new JsonObject();
++
++ final JsonArray worldsData = new JsonArray();
++
++ for (final Map.Entry<World, Object2IntOpenHashMap<ThrowableWithEquals>> entry : SYNC_LOADS.entrySet()) {
++ final World world = entry.getKey();
++
++ final JsonObject worldData = new JsonObject();
++
++ worldData.addProperty("name", world.getWorld().getName());
++
++ final List<Pair<ThrowableWithEquals, Integer>> data = new ArrayList<>();
++
++ entry.getValue().forEach((ThrowableWithEquals stacktrace, Integer times) -> {
++ data.add(new Pair<>(stacktrace, times));
++ });
++
++ data.sort((Pair<ThrowableWithEquals, Integer> pair1, Pair<ThrowableWithEquals, Integer> pair2) -> {
++ return pair2.getSecond().compareTo(pair1.getSecond()); // reverse order
++ });
++
++ final JsonArray stacktraces = new JsonArray();
++
++ for (Pair<ThrowableWithEquals, Integer> pair : data) {
++ final JsonObject stacktrace = new JsonObject();
++
++ stacktrace.addProperty("times", pair.getSecond());
++
++ final JsonArray traces = new JsonArray();
++
++ for (StackTraceElement element : pair.getFirst().stacktrace) {
++ traces.add(String.valueOf(element));
++ }
++
++ stacktrace.add("stacktrace", traces);
++
++ stacktraces.add(stacktrace);
++ }
++
++
++ worldData.add("stacktraces", stacktraces);
++ worldsData.add(worldData);
++ }
++
++ ret.add("worlds", worldsData);
++
++ return ret;
++ }
++
++ static final class ThrowableWithEquals {
++
++ private final StackTraceElement[] stacktrace;
++ private final int hash;
++
++ public ThrowableWithEquals(final StackTraceElement[] stacktrace) {
++ this.stacktrace = stacktrace;
++ this.hash = ThrowableWithEquals.hash(stacktrace);
++ }
++
++ public static int hash(final StackTraceElement[] stacktrace) {
++ int hash = 0;
++
++ for (int i = 0; i < stacktrace.length; ++i) {
++ hash *= 31;
++ hash += stacktrace[i].hashCode();
++ }
++
++ return hash;
++ }
++
++ @Override
++ public int hashCode() {
++ return this.hash;
++ }
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (obj == null || obj.getClass() != this.getClass()) {
++ return false;
++ }
++
++ final ThrowableWithEquals other = (ThrowableWithEquals)obj;
++ final StackTraceElement[] otherStackTrace = other.stacktrace;
++
++ if (this.stacktrace.length != otherStackTrace.length) {
++ return false;
++ }
++
++ if (this == obj) {
++ return true;
++ }
++
++ for (int i = 0; i < this.stacktrace.length; ++i) {
++ if (!this.stacktrace[i].equals(otherStackTrace[i])) {
++ return false;
++ }
++ }
++
++ return true;
++ }
++ }
++}
+diff --git a/src/main/java/net/minecraft/server/ChunkProviderServer.java b/src/main/java/net/minecraft/server/ChunkProviderServer.java
+index e75e311376..7ade9a53b4 100644
+--- a/src/main/java/net/minecraft/server/ChunkProviderServer.java
++++ b/src/main/java/net/minecraft/server/ChunkProviderServer.java
+@@ -321,6 +321,7 @@ public class ChunkProviderServer extends IChunkProvider {
+ // Paper start - async chunk io // Paper start - async chunk loading
+ this.world.asyncChunkTaskManager.raisePriority(x, z, com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGHEST_PRIORITY);
+ // Paper end
++ com.destroystokyo.paper.io.SyncLoadFinder.logSyncLoad(this.world, x, z); // Paper - sync load info
+ this.world.timings.chunkAwait.startTiming(); // Paper
+ this.serverThreadQueue.awaitTasks(completablefuture::isDone);
+ this.world.timings.chunkAwait.stopTiming(); // Paper
+diff --git a/src/main/java/net/minecraft/server/World.java b/src/main/java/net/minecraft/server/World.java
+index 9f657b01fc..eb6929e2b0 100644
+--- a/src/main/java/net/minecraft/server/World.java
++++ b/src/main/java/net/minecraft/server/World.java
+@@ -1252,7 +1252,7 @@ public abstract class World implements IIBlockAccess, GeneratorAccess, AutoClose
+
+ for (int i1 = i; i1 <= j; ++i1) {
+ for (int j1 = k; j1 <= l; ++j1) {
+- Chunk chunk = this.getChunkProvider().getChunkAt(i1, j1, false);
++ Chunk chunk = (Chunk)this.getChunkIfLoadedImmediately(i1, j1); // Paper
+
+ if (chunk != null) {
+ chunk.a(entity, axisalignedbb, list, predicate);
+@@ -1272,7 +1272,7 @@ public abstract class World implements IIBlockAccess, GeneratorAccess, AutoClose
+
+ for (int i1 = i; i1 < j; ++i1) {
+ for (int j1 = k; j1 < l; ++j1) {
+- Chunk chunk = this.getChunkProvider().getChunkAt(i1, j1, false);
++ Chunk chunk = (Chunk)this.getChunkIfLoadedImmediately(i1, j1); // Paper
+
+ if (chunk != null) {
+ chunk.a(entitytypes, axisalignedbb, list, predicate);
+@@ -1294,7 +1294,7 @@ public abstract class World implements IIBlockAccess, GeneratorAccess, AutoClose
+
+ for (int i1 = i; i1 < j; ++i1) {
+ for (int j1 = k; j1 < l; ++j1) {
+- Chunk chunk = ichunkprovider.getChunkAt(i1, j1, false);
++ Chunk chunk = (Chunk)this.getChunkIfLoadedImmediately(i1, j1); // Paper
+
+ if (chunk != null) {
+ chunk.a(oclass, axisalignedbb, list, predicate);
+--
+2.20.1
+