How I Automated Minecraft Server Builds

If you have kids that are old enough to game on any sort of device with a screen, you’ve probably been asked about virtual Lego kits. And I don’t mean the various branded video games like LEGO Worlds or the LEGO Star Wars games. No, I’m talking about something far more addictive – Minecraft.

My kids are Minecraft fanatics. They could play for hours on end while creative how-tos and “Let’s Play” YouTube videos loop non-stop in the background. And they claim they want to play Minecraft together, although that’s more theory than actual practice in the end. They also like to experiment and try to build the different things they see on YouTube. They wanted multiple worlds to use as playgrounds for their different ideas.

And they even got me to play a few times.

So during the summer of 2020, I started looking into how I could build Minecraft server appliances. I had built a few Minecraft servers by hand before that, but they were difficult to maintain and keep up-to-date with Minecraft Server dot releases and general operating system maintenance.

I thought a virtual appliance would be the best way to do this, and what follows is my opinionated way of building a Minecraft server.

TL;DR: Here is the link to my GitHub repo with the Packer files and scripts that I use.

A little bit of history

The initial version of the virtual appliance was built on Photon OS, a stripped-down Linux distribution created by my employer for virtual appliances and running container workloads. William Lam has some great content on how to create a Photon-based virtual appliance using Packer.

This setup worked pretty well until Minecraft released version 1.17, also known as the Caves and Cliffs version, in the summer of 2021.

There are a couple of editions of Minecraft. The two main ones are Bedrock, which is geared towards Windows, mobile devices, and video game consoles, and Java, which runs on the JVM and is available for Windows, Linux, and macOS.

My kids play the Java edition, and up until this point, Minecraft Java edition servers ran on the Java 8 JDK. Minecraft 1.17, however, required the Java 16 JDK. And that led to a second problem: the only JDK in the Photon repositories at the time was Java 8.

Now this doesn’t seem like a problem, or at least it isn’t on a small scale. There are a few open-source OpenJDK distributions that I could adopt, and I ended up going with Adoptium’s Temurin OpenJDK. But after building a server or two, I didn’t really feel like maintaining a manual install process. I wanted the ease of use that comes with installing and updating from a package repository, and that wasn’t available for Photon.

So I needed a different Linux distribution. CentOS would have been my first choice, but I didn’t want something that was basically a rolling release candidate. My colleague Timo Sugliani spoke very highly of Debian, and he released a set of Packer templates for building lightweight Debian virtual appliances on GitHub. I modified these templates to use the Packer vsphere-iso builder and started porting over my appliance build process.

Customizing the Minecraft Experience

Do you want a flat world or something without mob spawns? Or want to try out a custom world seed? You can set all of that during the appliance deployment. I wanted the appliance to be self-configuring, so I spent some time extending William Lam’s OVF properties XML file to include all of the Minecraft server attributes that you can configure in the server.properties file. This allows you to deploy the appliance and configure the Minecraft environment without having to SSH into it to manually edit the file.

One day, I may trust my kids enough to give them limited access to vCenter to deploy their own servers. This would make it easier for them.

Unfortunately, that day is not today. But this still makes my life easier.

Installing and Configuring Minecraft

The OVF file does not contain the Minecraft server binaries. They actually get installed during the appliance’s first boot. There are a few reasons for this. First, the Minecraft EULA does not allow you to distribute the binaries. At least that was my understanding of it.

Second, and more importantly, you may not always want the latest and greatest server version, especially if you’re planning to develop or use mods. Mods are often developed against specific Minecraft versions, and the compatibility matrix between mod and server versions can get lengthy.

The appliance is not built to utilize mods out of the box, but there is nothing stopping someone from installing Forge, Fabric, or other modified binaries. I just don’t feel like taking on that level of effort, and my kids have so far resisted learning important life skills like the Bash CLI.

And finally, there isn’t much difference between downloading and installing the server binary on first boot and downloading and installing an updated binary. Minecraft Java edition is distributed as a JAR file, so I only really need to download it and place it in the correct folder.

I have a pair of PowerShell scripts that make these processes pretty easy. Both scripts have the same core function – query an online version manifest that is used by the Minecraft client and download the specified version to the local machine. The update script also has some extra logic in it to check if the service is running and gracefully stop it before downloading the updated server.jar file.

You can find these scripts in the files directory on GitHub.
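For reference, here is a minimal sketch of the shared download logic, using the publicly documented version manifest that the Minecraft launcher consumes. My actual scripts in the repo add parameter validation, logging, and the service-handling logic described above, so treat this as an illustration rather than a copy of them.

```powershell
# Minimal sketch of the core download logic (not the full scripts from the repo).
param(
    [string]$Version     = 'latest',
    [string]$Destination = '/opt/Minecraft/bin/server.jar'
)

# The launcher's version manifest lists every release plus a URL to a
# per-version JSON document.
$manifestUri = 'https://launchermeta.mojang.com/mc/game/version_manifest.json'
$manifest    = Invoke-RestMethod -Uri $manifestUri

if ($Version -eq 'latest') { $Version = $manifest.latest.release }

# The per-version document contains the download URL for the server JAR.
$versionEntry = $manifest.versions | Where-Object { $_.id -eq $Version }
$versionInfo  = Invoke-RestMethod -Uri $versionEntry.url
Invoke-WebRequest -Uri $versionInfo.downloads.server.url -OutFile $Destination
```

The update script wraps this same logic with a check for the running service, stopping it via systemd before the JAR file is replaced.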

Running Minecraft as a systemd Service

Finally, I didn’t want to have to deal with manually starting or restarting the Minecraft service. So I Googled around and found a bunch of sample systemd unit files. I did a lot of testing with these samples to cobble together one of my own (and I apologize, I did not keep track of the links I used when creating my service file).

My service file has one external dependency: the MCRCON tool, which is required to shut down the service cleanly. While I was testing this, I ran into a number of issues where I could stop Minecraft, but doing so wouldn’t kill the Java process that spawned with it. It also didn’t guarantee that the world was properly saved or that users were alerted to the shutdown.

By using MCRCON, we can alert users to the shutdown, save the world, and gracefully exit all of the processes through a server shutdown command.

I also have the Minecraft service set to restart on failure. My kids have a tendency to crash the server by blowing up large stacks of TNT in a cave or trying other crazy things they see on YouTube, and that tends to crash the binary. Restarting the process automatically saves me a little headache.
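For illustration, here is an approximation of what such a unit file can look like. This is a trimmed-down sketch rather than the exact file from my repo: the Java heap flags, RCON port, and password are placeholder values, and the mcrcon path simply follows the /opt/Minecraft layout described later in this post.

```ini
[Unit]
Description=Minecraft Java Edition Server
After=network.target

[Service]
User=minecraft
Group=minecraft
WorkingDirectory=/opt/Minecraft/bin
# Heap sizes here are examples; tune them for your host
ExecStart=/usr/bin/java -Xms1G -Xmx4G -jar server.jar nogui
# Warn players, save the world, then issue a clean stop over RCON
ExecStop=/opt/Minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p changeme "say Server is shutting down" "save-all" "stop"
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```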

Prerequisites

Before we begin, you’ll want to have a few prerequisites in place. These are:

  • The latest copy of HashiCorp’s Packer tool installed on your build machine
  • The latest copy of the Debian 11 NetInstall ISO
  • OVFTool

There are a few files that you should edit to match your environment before you attempt the build process. These are:

  • debian.auto.pkrvars.hcl – variables for the build process
  • debian-minecraft.pkr.hcl – the iso_paths line includes part of a hard-coded path that may not reflect your environment, and you may want to change the CPUs or RAM allocated to the VM
  • preseed.cfg (located in the http folder) – localization information and the root password

This build uses the Packer vsphere-iso builder, so it talks directly to vCenter. It does not use the older vmware-iso builder.

The Appliance Build Process

As I mentioned above, I use Packer to orchestrate this build process. There is a Linux shell script in the public GitHub repo called build.sh that will kick off this build process.

The first step, obviously, is to install Debian. This step is fully automated and controlled by the preseed.cfg file that is referenced in the packer file.

Once Debian is installed, we copy over a default Bash configuration and our init script, which runs when the appliance boots for the first time to configure the hostname and networking stack.

After these files are copied over, the Packer build begins to configure the appliance. The steps that it takes are:

  • Run apt-get update and apt-get upgrade to update any outdated packages
  • Install our system utilities, including UFW
  • Configure UFW to allow SSH and enable it
  • Install VMware Tools
  • Set up the package repositories for, and install, PowerShell and the Temurin OpenJDK
  • Configure the rc.local file that runs on first boot
  • Disable IPv6 because Java will default to communicating over IPv6 if it is enabled

After this, we do our basic Minecraft setup. This step does the following:

  • Creates our Minecraft service user and group
  • Sets up our basic folder structure in /opt/Minecraft
  • Downloads MCRCON into the /opt/Minecraft/tools/mcrcon directory
  • Copies over the service file and scripts that will run on first boot

The last three steps of the build are to run a cleanup script, export the appliance to OVF, and create the OVA file with the configurable OVF properties. The cleanup script cleans out the local apt cache and log files and zeroes out the free space to reduce the size of the disks on export.

The configurable OVF properties include all of the networking settings, the root password and SSH key, and, as mentioned above, the configurable options in the Minecraft server.properties file. OVFTool and William Lam’s script are required to create the OVA file and inject the OVF properties, and the process is outlined in this blog post.

The XML file with the OVF Properties is located in the postprocess-ova-properties folder in my GitHub repo.

The outcome of this process is a ready-to-deploy OVA file that can be uploaded to a content library.

First Boot

So what happens after you deploy the appliance and boot it for the first time?

First, the debian-init.py script will run to configure the basic system identity. This includes the IP address and network settings, root password, and SSH public key for passwordless login.

Second, we will regenerate the host SSH keys so each appliance will have a unique key. If we don’t do this step, every appliance we deploy will have the same SSH host keys as the original template. This is handled by the debian-regeneratesshkeys.sh script that is based on various scripts that I found on other sites.

Our third step is to install and configure the Minecraft server using the debian-minecraftinstall.sh script. This has a number of sub-steps. These are:

  • Retrieve our Minecraft-specific OVF Properties
  • Call our PowerShell script to download the correct Minecraft server version to /opt/Minecraft/bin
  • Initialize the Minecraft server to create all of the required folders and files
  • Edit eula.txt to accept the EULA. The server will not run and let users connect without this step
  • Edit the server.properties file and replace any default values with the OVF property values (a rough sketch of this step follows the list)
  • Edit the systemd file and configure the firewall to use the Minecraft and RCON ports
  • Reset permissions and ownership on the /opt/Minecraft folders
  • Enable and start Minecraft
  • Configure our Cron job to automatically install system and Minecraft service updates
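As an example of the server.properties step, here is a minimal sketch of the idea, assuming PowerShell and open-vm-tools are on the appliance (both are installed during the Packer build). The guestinfo key prefix and file path are illustrative, not the exact names my script uses.

```powershell
# Read the OVF environment from VMware Tools and index the properties by key.
$ovfXml = [xml](& vmtoolsd --cmd 'info-get guestinfo.ovfEnv')
$props  = @{}
foreach ($p in $ovfXml.Environment.PropertySection.Property) {
    $props[$p.key] = $p.value
}

# Replace matching defaults in server.properties with the deployment-time values.
# The 'minecraft.' prefix is a placeholder naming convention for this sketch.
$propsFile = '/opt/Minecraft/bin/server.properties'
$updated = Get-Content $propsFile | ForEach-Object {
    $key    = ($_ -split '=', 2)[0]
    $lookup = 'minecraft.' + $key
    if ($props.ContainsKey($lookup)) {
        '{0}={1}' -f $key, $props[$lookup]
    }
    else {
        $_
    }
}
Set-Content -Path $propsFile -Value $updated
```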

The end result is a ready-to-play Minecraft VM.

All of the Packer files and scripts are available in my GitHub repository. Feel free to check it out and adapt it to your needs.

Minimal Touch VDI Image Building With MDT, PowerCLI, and Chocolatey

Recently, Mark Brookfield posted a three-part series on the process he uses for building Windows 10 images in HobbitCloud (Part 1, Part 2, Part 3). Mark has put together a great series of posts that explain the tools and the processes that he is using in his lab, and it has inspired me to revisit this topic and talk about the process and tooling I currently use in my lab and the requirements and decisions that influenced this design.

Why Automate Image Building?

Hand-building images is a time-intensive process.  It is also potentially error-prone as it is easy to forget applications and specific configuration items, requiring additional work or even building new images depending on the steps that were missed.  Incremental changes that are made to templates may not make it into the image building documentation, requiring additional work to update the image after it has been deployed.

Automation helps solve these challenges and provide consistent results.  Once the process is nailed down, you can expect consistent results on every build.  If you need to make incremental changes to the image, you can add them into your build sequence so they aren’t forgotten when building the next image.

Tools in My Build Process

When I started researching my image build process back in 2017, I was looking to find a way to save time and provide consistent results on each build.  I wanted a tool that would allow me to build images with little interaction with the process on my part.  But it also needed to fit into my lab.  The main tools I looked at were Packer with the JetBrains vSphere plugin and Microsoft Deployment Toolkit (MDT).

While Packer is an incredible tool, I ended up selecting MDT as the main tool in my process.  My reason for selecting MDT has to do with NVIDIA GRID.  The vSphere plugin for Packer does not currently support provisioning machines with vGPU, so using this tool would have required manual post-deployment work.

One nice feature of MDT is that it can utilize a SQL Server database for storing details about registered machines such as the computer name, the OU where the computer object should be placed, and the task sequence to run when booting into MDT.  This allows a new machine to be provisioned in a zero-touch fashion, and the database can be populated from PowerShell.

Unlike Packer, which can create and configure the virtual machine in vCenter, MDT only handles the operating system deployment.  So I needed some way to create and configure the VM in vCenter with a vGPU profile.  The best method of doing this is PowerCLI.  While there are no native cmdlets for managing vGPUs or other shared PCI devices in PowerCLI, there are ways to utilize vSphere extension data to add a vGPU profile to a VM.
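Here is a rough sketch of that extension data approach, based on the publicly documented vSphere API objects. The VM name and vGPU profile string are placeholders, and the VM needs to be powered off when the device is added.

```powershell
# Attach an NVIDIA vGPU profile to an existing, powered-off VM via the vSphere API.
$vm          = Get-VM -Name 'VDI-GOLD-GPU'   # placeholder VM name
$vgpuProfile = 'grid_p4-2q'                  # placeholder vGPU profile

$spec   = New-Object VMware.Vim.VirtualMachineConfigSpec
$change = New-Object VMware.Vim.VirtualDeviceConfigSpec
$change.Operation = 'add'

# The Shared PCI device is a PCI passthrough device with a vGPU backing.
$device              = New-Object VMware.Vim.VirtualPCIPassthrough
$device.Backing      = New-Object VMware.Vim.VirtualPCIPassthroughVgpuBackingInfo
$device.Backing.Vgpu = $vgpuProfile
$change.Device       = $device

$spec.DeviceChange = @($change)
$vm.ExtensionData.ReconfigVM($spec)
```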

While MDT can install applications as part of a task sequence, I wanted something a little more flexible.  Typically, when a new version of an application was added, the way I had structured my task sequences required updating them to point to the newer version.  The reason for this is that I wasn’t using Application Groups for certain applications that were going into the image, mainly the agents being installed, as I wanted to control the install order and manage reboots. (Yes…I may have been using this wrong…)

I wanted to reduce my operational overhead when applications were updated, so I went looking for alternatives.  I ended up settling on Chocolatey to install most of the applications in my images, with the packages hosted in a private repository running on the free edition of ProGet.
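As a hedged example of how that fits together, the build sequence only needs the internal feed registered and a list of package IDs; the ProGet URL and package names below are placeholders for whatever you publish internally.

```powershell
# Register the internal ProGet feed (placeholder URL) with a higher priority
# than the public community repository.
choco source add -n=proget -s='https://proget.lab.local/nuget/internal-choco/' --priority=1

# Install image applications from the internal feed; package IDs are examples.
choco install vmware-tools-agent horizon-agent -y --source=proget
```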

My Build Process Workflow

My build workflow consists of six steps with one branch.  These steps are:

  1. Create a new VM in vCenter
  2. Configure VM options such as memory reservations and video RAM
  3. GPU Flag Only – Add a virtual GPU with the correct profile to the VM.
  4. Identify the task sequence that will be used.  There are different task sequences for GPU and non-GPU machines, and logic in the script builds the task sequence name from the parameters passed in when it is run.
  5. Create a new computer entry in the MDT database.  This includes the computer name, MAC address, task sequence name, role, and a few other variables.  This step is performed in PowerShell using the MDTDB PowerShell module.
  6. Power on the VM. This is done using PowerCLI. The VM will PXE boot into a Windows PE environment configured to point to my MDT server. (A rough PowerShell sketch of this workflow follows the list.)
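Here is a condensed, hedged sketch of the workflow above. It assumes PowerCLI and the MDTDB module by Michael Niehaus are available and that you are already connected to vCenter; the host, datastore, portgroup, database, and task sequence names are placeholders, and the database settings keys are the common MDT ones rather than anything specific to my environment.

```powershell
Import-Module VMware.PowerCLI
Import-Module MDTDB

$vmName = 'VDI-GOLD-1909'   # placeholder names throughout
$useGpu = $true

# 1-2. Create the VM shell and set resource options (video RAM tweaks omitted here)
$vm = New-VM -Name $vmName -VMHost 'esxi01.lab.local' -Datastore 'vsanDatastore' `
             -NumCpu 4 -MemoryGB 8 -DiskGB 100 -NetworkName 'VDI-PortGroup' -GuestId 'windows9_64Guest'
Get-VMResourceConfiguration -VM $vm | Set-VMResourceConfiguration -MemReservationGB 8 | Out-Null

# 3-4. (The vGPU branch from the earlier sketch would go here.) Pick the task sequence.
$taskSequence = if ($useGpu) { 'W10-1909-GPU' } else { 'W10-1909' }

# 5. Register the machine in the MDT database, keyed on its MAC address
$mac = (Get-NetworkAdapter -VM $vm).MacAddress
Connect-MDTDatabase -SqlServer 'mdt-sql.lab.local' -Database 'MDT' | Out-Null
New-MDTComputer -MacAddress $mac -Description $vmName -Settings @{
    OSInstall       = 'YES'
    OSDComputerName = $vmName
    TaskSequenceID  = $taskSequence
} | Out-Null

# 6. Power on; the VM PXE boots into WinPE and pulls its settings from the database
Start-VM -VM $vm | Out-Null
```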

Build Process

After the VM is powered on and boots to Windows PE, the rest of the process is hands off. All of the MDT prompts, such as the prompt for a computer name or the task sequence, are disabled, and the install process relies on the database for things like computer name and task sequence.

From this point forward, it takes about forty-five minutes to an hour to complete the task sequence. MDT installs Windows 10 and any drivers like the VMXNET3 driver, installs Windows Updates from an internal WSUS server, installs any agents or applications, such as VMware Tools, the Horizon Agent, and the UEM/DEM agent, silently runs the OSOT tool, and stamps the registry with the image build date.
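The registry stamp at the end is a tiny step, but it makes auditing images later much easier. It amounts to something along these lines; the key path and value name here are hypothetical, not the ones my task sequence uses.

```powershell
# Stamp the image with its build date; path and value name are illustrative.
$key = 'HKLM:\SOFTWARE\Lab\ImageBuild'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name 'BuildDate' -Value (Get-Date -Format 'yyyy-MM-dd') `
                 -PropertyType String -Force | Out-Null
```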

Future Direction and Process Enhancements

While this process works well today, it is a bit cumbersome. Each new Windows 10 release requires a new task sequence for version control. It is also difficult to work tools like the OSDeploy PowerShell scripts by David Segura (used for slipstreaming updates into a Windows 10 WIM) into the process. While there are ways to automate MDT, I’d rather invest time in automating builds using Packer.

There are a couple of post-deployment steps that I would like to integrate into my build process as well. I would like to utilize Pester to validate the image build after it completes, and then if it passes, execute a shutdown and VM snapshot (or conversion to template) so it is ready to be consumed by Horizon. My plan is to utilize a tool like Jenkins to orchestrate the build pipeline and do something similar to the process that Mark Brookfield has laid out.
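I have not built this yet, but the shape of it would be something like the sketch below: run a Pester test file against the finished machine and, if everything passes, shut it down and snapshot it with PowerCLI. The test file name and VM name are placeholders.

```powershell
# Hypothetical post-build validation step; Validate-Image.Tests.ps1 is a placeholder.
$vmName  = 'VDI-GOLD-1909'
$results = Invoke-Pester -Path '.\Validate-Image.Tests.ps1' -PassThru

if ($results.FailedCount -eq 0) {
    $vm = Get-VM -Name $vmName
    Shutdown-VMGuest -VM $vm -Confirm:$false | Out-Null

    # Wait for a clean guest shutdown before snapshotting
    while ((Get-VM -Name $vmName).PowerState -ne 'PoweredOff') { Start-Sleep -Seconds 10 }

    New-Snapshot -VM $vm -Name ('Build {0}' -f (Get-Date -Format 'yyyy-MM-dd')) | Out-Null
}
```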

The ideal process that I am working towards will have multiple workflows to manage various aspects of the process. Some of these are:

1. A process for automatically creating updated Windows 10 ISOs with the latest Windows Updates using the OSDeploy PowerShell module.

2. A process for creating Chocolatey package updates and submitting them to my ProGet repository for applications managed by Chocolatey.

3. A process to build new images when Windows 10 or key applications (such as VMware Tools, the Horizon Agent, or NVIDIA drivers) are updated. This process will ideally use Packer as the build tool to simplify management. The main dependency for this step is adding NVIDIA GRID support to the JetBrains Packer vSphere plugin.

So this is what I’m doing for image builds in my lab, and the direction I’m planning to go.

VDI in the Time of Frequent Windows 10 Upgrades

The longevity of Windows 7, and Windows XP before that, has spoiled many customers and enterprises.  It provided IT organizations with a stable base to build their end-user computing infrastructures and applications on, and users were provided with a consistent experience.  The update model was fairly well known – a major service pack with all updates and feature enhancements would come out after about one year.

Whether this stability was good for organizations is debatable.  It certainly came with trade-offs, security of the endpoint being the primary one.

The introduction of Windows 10 has changed that model, and Microsoft is continuing to refine it.  Microsoft is now releasing two major “feature updates” for Windows 10 each year, and these updates will only be supported for about 18 months each.  Microsoft calls this the “Windows as a Service” model, and it consists of two production-ready semi-annual release channels – a targeted deployment channel used by pilot users to test applications, and a broad deployment channel that replaces the “Current Branch for Business” option for enterprises.

Gone are the days when the end user’s desktop will have the same operating system for its entire life cycle.

(Note: While there is still a long-term servicing branch, Microsoft has repeatedly stated that this branch is suited for appliances and “machinery” that should not receive frequent feature updates such as ATMs and medical equipment.)

In order to facilitate this new delivery model, Microsoft has refined their in-place operating system upgrade technology.  While it has been possible to do this for years with previous versions of Windows, it was often flaky.  Settings wouldn’t port over properly, applications would refuse to run, and other weird errors would crop up.  That’s mostly a thing of the past when working with physical Windows 10 endpoints.

Virtual desktops, however, don’t seem to handle in-place upgrades well.  Virtual desktops often utilize various additional agents to deliver desktops remotely to users, and the in-place upgrade process can break these agents or cause otherwise unexpected behavior.  Upgrades also have a tendency to reinstall Windows Modern Applications that have been removed or to reset settings (although Microsoft is supposed to be working on those items).

If Windows 10 feature release upgrades can break, or at least require significant rework of, existing VDI images, what is the best method for handling them in a VDI environment?

I see two main options.  The first is to manually uninstall the VDI agents from the parent VMs, take a snapshot, and then do an in-place upgrade.  After the upgrade is complete, the VDI agents would need to be reinstalled on the machine.  In my opinion, this option has a couple of drawbacks.

First, it requires a significant amount of time.  While there are a number of steps that could be automated, validating the image after the upgrade would still require an administrator.  Someone would have to log in to validate that all settings were carried over properly and that Modern Applications were not reinstalled.  This may become a significant time sink if I have multiple parent desktop images.

Second, this process wouldn’t scale well.  If I have a large number of parent images, or a large estate of persistent desktops, I have to build a workflow to remove agents, upgrade Windows, and reinstall agents after the upgrade.  Not only do I have to test this workflow significantly, but I still have to test my desktops to ensure that the upgrade didn’t break any applications.

The second option, in my view, is to rebuild the desktop image when each new version of Windows 10 is released.  This ensures that you have a clean OS and application installation with every new release, and it would require less testing to validate because I don’t have to check to see what broke during the upgrade process.

One of the main drawbacks to this approach is that image building is a time-consuming process.  This is where automated deployments can be helpful.  Tools like Microsoft Deployment Toolkit can help administrators build their parent images, including any agents and required applications, automatically as part of a task sequence.  With this type of toolkit, an administrator can automate their build process so that when a new version of Windows 10 is released, or a core desktop component like the Horizon or XenDesktop agent is updated, the image will have the latest software the next time a new build is started.

(Note: MDT is not the only tool in this category.  It is, however, the one I’m most familiar with.  It’s also the tool that Trond Haavarstein, @XenAppBlog, used for his Automation Framework Tool.)

Let’s take this one step further.  As an administrator, I would be doing a new Windows 10 build every 6 months to a year to ensure that my virtual desktop images remain on a supported version of Windows.  At some point, I’ll want to do more than just automate the Windows installation so that my end result, a fully configured virtual desktop that is deployment ready, is available at the push of a button.  This can include things like bringing it into Citrix Provisioning Services or shutting it down and taking a snapshot for VMware Horizon.

Virtualization has allowed for significant automation in the data center.  Tools like VMware PowerCLI and the Nutanix REST API make it easy for administrators to deploy and manage virtual machines using a few lines of PowerShell.   Using these same tools, I can also take details from this virtual machine shell, such as the name and MAC address, and inject them into my MDT database along with a Task Sequence and role.  When I power the VM on, it will automatically boot to MDT and start the task sequence that has been defined.

This is bringing “Infrastructure as Code” concepts to end-user computing, and the results should make it easier for administrators to test and deploy the latest versions of Windows 10 while reducing their management overhead.

I’m in the process of working through the last bits to automate the VM creation and integration with MDT, and I hope to have something to show in the next couple of weeks.

 

Using the Right Tool for the Job #VDM30in30

At the last Wisconsin VMware User Group meeting, there was a spirited, yet friendly, discussion between one of the leaders and a VMware SE about whether people should use vCenter Orchestrator or PowerCLI.  The discussion focused on which was better to learn and use for managing and automating vSphere environments.

That conversation got me thinking about what the “right tool” is and how to select it.

So what is the right tool?  Is it Orchestrator?  PowerCLI?  Or something else?

As with anything else in IT, the answer is “It Depends.”   The automation engineer’s toolbox has grown significantly over the years, and before you can really answer that question, you need to understand what tasks you’re trying to accomplish and the capabilities of the different tools.

Not all automation tools are intended to be used the same way, and using one does not preclude using another to supplement it.  vCenter Orchestrator, for instance, is a workflow automation tool with a number of canned workflows for handling routine tasks in vSphere, and it is the underlying automation engine for vCloud Automation Center (now vRealize Automation).  But it also includes plugins for interfacing with and/or managing other systems – including running PowerShell scripts on other hosts.  It is great for tasks that may be run multiple times, either on demand or on a schedule, but it may not be the right fit for quickly automating a one-off task.

PowerCLI, on the other hand, is based on PowerShell.  It is a great command-line shell with powerful scripting capabilities.  Like Orchestrator, it is extensible, and Microsoft and other vendors have released their own PowerShell modules.  This allows an administrator to automate a large number of 3rd-party systems from a single shell.  But while it has some workflow capabilities, and even a configuration management tool in Desired State Configuration, it isn’t necessarily the best tool for providing the front or middle tiers of large-scale batch processing, enterprise self-service, or orchestration.

These are just two examples of the tools that you can pick from when automating your environment.  In the last couple of years, VMware has significantly expanded the number of scripting languages that they support by releasing additional SDKs for a variety of programming and scripting languages.

I should point out that neither of the two camps in this conversation were wrong.  An admin should ideally know of multiple tools and when to use them to maximum effect to solve a problem.

How I Use PowerCLI and vCenter Orchestrator

My answer to the question of “PowerCLI or vCenter Orchestrator” is best summed up by this classic meme:

Why Not Both

A lot of the automation that I’ve done at $Work has used both PowerShell, including PowerCLI, and vCenter Orchestrator working together to accomplish the tasks.

I use PowerShell to do almost all of the heavy lifting against a variety of systems including Active Directory, Exchange, vCenter, and even Microsoft Online Services.  The scripts are usually less than 200 lines, and my goal was to follow something similar to the Unix Design Philosophy where each script specializes in one specific task.  This, in my opinion, makes it easier to modify, add new features, or reuse code in other jobs.

I use vCenter Orchestrator for three things.  The first is to tie together the various scripts into workflows.  The second is to act as a job scheduling agent since I don’t have something like UC4 or Tidal in my environment.  The third is to enable me to offload tasks to developers or the help desk without having to give them any additional rights on other systems.

By utilizing the strengths of both Orchestrator and PowerShell, I’m able to accomplish more than just relying on one over the other.