OpenShift Container Platform (OCP) is capable of building and hosting applications, including old retro video games. DOOM, one of the oldest and most popular retro FPS games, released in 1993, has been containerized and brought into Kubernetes through a series of projects culminating in one called kubedoom. Like DOOM, Red Hat, the home of OpenShift, has also been around since 1993. So for this exercise I thought it would be cool to bring their legacies together in a contemporary Fedora-based image and run it on OpenShift. We'll call this fork ocpdoom.
This walkthrough requires a working instance of OpenShift 4. That could be any of the many variants of OpenShift available for just about any situation:
- OpenShift On-prem (Bare Metal, Red Hat Virtualization, VMware, OpenStack)
- OpenShift on the Cloud: AWS, Azure, Google Cloud, IBM Cloud
- Microshift
- OpenShift Local
Note: you will need access to the OpenShift command-line interface, the `oc` tool.
Once you're logged into your OpenShift cluster using `oc`, the process of building and deploying the ocpdoom image is very simple.
- Create the OpenShift projects in which DOOM and its monsters will reside by running the following commands from a terminal shell:
oc new-project monsters
oc new-project ocpdoom
- We will now create a service account named `doomguy`, create a role named `monster-control`, and assign it to him (we'll sanity-check these permissions right after):
oc create serviceaccount doomguy -n ocpdoom
oc create role monster-control --verb=get,list,watch,delete --resource=pods -n monsters
oc create rolebinding monster-control --role=monster-control --serviceaccount=ocpdoom:doomguy -n monsters
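If you'd like to sanity-check the RBAC wiring, `oc auth can-i` with impersonation is one way to do it (this assumes your user is allowed to impersonate service accounts, e.g. a cluster-admin); both commands should answer yes:
# optional: confirm doomguy can list and delete pods in the monsters project
oc auth can-i list pods -n monsters --as=system:serviceaccount:ocpdoom:doomguy
oc auth can-i delete pods -n monsters --as=system:serviceaccount:ocpdoom:doomguy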
- Create the ocpdoom application and build the image from source using `oc new-app`:
oc new-app https://github.com/OpenShiftDemos/ocpdoom.git --name=ocpdoom -n ocpdoom
If you would like to see the build in progress:
oc logs bc/ocpdoom -f -n ocpdoom
The above `oc new-app` command did several things:
- Created a BuildConfig and Deployment from the contents of the specified GitHub repo.
- Spun up a build pod, built the ocpdoom image, and pushed it into OpenShift's internal image registry.
- Finally, it attempted to deploy the image once it was present in that registry.
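To see everything `oc new-app` generated in one view, `oc status` gives a summary (or `oc get all` for the raw objects):
# optional: review what new-app created
oc status -n ocpdoom
oc get all -n ocpdoom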
Once the build is complete and the container is deployed you should see an output similar to this:
oc get pods -n ocpdoom
NAME READY STATUS RESTARTS AGE
ocpdoom-1-build 0/1 Completed 0 32m
ocpdoom-69c578bf87-4mjvx 0/1 Error 2 (28s ago) 117s
But why is the pod reporting an `Error` state!? Let's investigate with `oc logs`:
oc logs -l deployment=ocpdoom -n ocpdoom
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:ocpdoom:default" cannot list resource "pods" in API group "" at the cluster scope
2023/03/28 14:50:09 The following command failed: "[kubectl get pods -A -o go-template --template={{range .items}}{{.metadata.namespace}}/{{.metadata.name}} {{end}}]"
It's because ocpdoom is trying to list all pods in all namespaces, and OpenShift restricts namespace-to-namespace interaction out of the box. Here's where the `doomguy` service account and his `monster-control` role come in.
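You can confirm the gap with the same impersonation trick used earlier (again assuming your user can impersonate service accounts). Note that `monster-control` is a namespaced role scoped to the monsters project, not the cluster, so in addition to switching service accounts we'll also point ocpdoom at that one namespace:
# the pod currently runs as the default service account, which gets a "no" here
oc auth can-i list pods --all-namespaces --as=system:serviceaccount:ocpdoom:default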
Let's assign the `doomguy` service account to the newly created deployment:
oc set serviceaccount deployment ocpdoom doomguy -n ocpdoom
We can also narrow down the scope of where we want ocpdoom to focus by setting the `NAMESPACE` environment variable in the deployment:
oc set env deployment ocpdoom NAMESPACE=monsters -n ocpdoom
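To double-check that both changes landed on the deployment:
# optional: confirm the service account and the environment variable
oc get deployment ocpdoom -n ocpdoom -o jsonpath='{.spec.template.spec.serviceAccountName}{"\n"}'
oc set env deployment/ocpdoom --list -n ocpdoom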
Now check to see if your application pod is in a READY and Running state:
oc get pods -n ocpdoom
You should get an output similar to this:
NAME READY STATUS RESTARTS AGE
ocpdoom-1-build 0/1 Completed 0 44m
ocpdoom-74d97f4fbd-2h85d 1/1 Running 0 6s
Next you're going to need some monsters, or pods represented as demons in this case. You'll create them by deploying a simple little container:
oc new-app https://github.com/OpenShiftDemos/monster.git --name=monster -n monsters
Observe the build progress:
oc logs bc/monster -f -n monsters
oc get pods -n monsters
You should see output similar to this:
NAME READY STATUS RESTARTS AGE
monster-1-build 0/1 Completed 0 4m46s
monster-5cf6c54d68-w6ctj 1/1 Running 0 64s
Optionally, you can use `oc scale` to adjust the number of monsters you would like:
oc scale deployment monster --replicas=2 -n monsters
NAME READY STATUS RESTARTS AGE
monster-1-build 0/1 Completed 0 7m9s
monster-5cf6c54d68-4pwrx 1/1 Running 0 6s
monster-5cf6c54d68-w6ctj 1/1 Running 0 3m27s
To access DOOM from outside of OpenShift, we're going to create a Kubernetes Service using the `oc expose` command:
oc expose deployment/ocpdoom --port 5900 -n ocpdoom
Then we'll open up a connection to that service over the default VNC port (TCP/5900) we just exposed, using `oc port-forward`:
oc port-forward deployment/ocpdoom 5900:5900 -n ocpdoom
Leave that connection up and running in the background and move on to the next section.
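If you'd rather not dedicate a terminal to it, one option on Linux or macOS shells is to background the process (the log path here is just an example):
# optional: run the port-forward in the background
oc port-forward deployment/ocpdoom 5900:5900 -n ocpdoom > /tmp/ocpdoom-pf.log 2>&1 &
# stop it later with: kill %1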
You can connect to ocpdoom via a browser - no software install!
Using `oc`, we build a container that runs noVNC, with environment variables (ENDPOINT, PORT, PASSWORD) pointing it at the ocpdoom service exposed above.
# create noVNC container
oc new-app https://github.com/codekow/container-novnc.git \
--name novnc \
-n ocpdoom \
-e ENDPOINT='ocpdoom' \
-e PORT='5900' \
-e PASSWORD='openshift'
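As with the ocpdoom build, you can follow the noVNC build along and wait for its rollout to finish:
# optional: watch the build, then wait for the deployment
oc logs bc/novnc -f -n ocpdoom
oc rollout status deployment/novnc -n ocpdoom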
We then expose the novnc service by creating a route, which allows TLS ingress into OpenShift on port 443.
# create route
oc expose service \
novnc \
-n ocpdoom
# redirect http (80) to https (443)
oc patch route \
novnc \
-n ocpdoom \
--type=merge \
-p '{"spec":{"tls":{"termination":"edge","insecureEdgeTerminationPolicy":"Redirect"}}}'
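To confirm the patch took effect, read the TLS settings back off the route; this should print "edge Redirect":
# optional: verify edge termination and the http-to-https redirect
oc get route novnc -n ocpdoom -o jsonpath='{.spec.tls.termination} {.spec.tls.insecureEdgeTerminationPolicy}{"\n"}'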
ROUTE=$(oc -n ocpdoom get route novnc -o jsonpath='{.spec.host}')
echo 'Login via a browser at the link below - using "openshift" as the password'
echo "http://${ROUTE}"
The ocpdoom container houses X11 and VNC servers to display and connect to the game. To connect to DOOM with a VNC client, you can download and install the TigerVNC `vncviewer` found here.
Once downloaded, open the `vncviewer` application and enter `<ip address>:5900`, where the IP address is that of the host you're port-forwarding from. Just make sure there is no firewall blocking access to TCP/5900 between you and the bastion host.
Or, if the `oc port-forward` was issued from your localhost, just use `localhost:5900` like so:
Example:
Click Connect
Enter Password "openshift" and click OK
Congratulations! You're now playing DOOM in a container, within a Kubernetes pod, on OpenShift Container Platform, accessing it all through a VNC server using vncviewer.
It should look something like this:
At this point you can run around and play the game using your keyboard's arrow keys to move, `Ctrl` to shoot, and the space bar to open doors.
Here’s where the monster pods come in. You’ll stumble upon an open field with monsters like this roaming around:
Those monsters represent the pods in your monsters namespace. Shooting a monster kills it and the pod it represents. You can watch this happen on the OpenShift side by opening up another terminal and running the following command:
watch oc get pods -n monsters
The monsters keep on respawning! Since `oc new-app` also created a Kubernetes ReplicaSet, it will respawn the missing monster pod even after it's killed. So really the only way to get rid of all the monsters once and for all is a nonviolent tactic via the command line. One way to accomplish this is the `oc scale` command:
oc scale deployment monster --replicas=0 -n monsters
Once all the pods are Terminated, all the active monsters in the field should vanish. But watch out, there are other creatures lurking around! Enjoy 💥
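Behind the scenes it's the ReplicaSet, managed by the Deployment that `oc new-app` created, doing the respawning. You can watch it hold the replica count steady while you play:
# optional: watch desired vs. ready replicas in real time
watch oc get replicaset -n monsters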
You can see from this example how easy it is to build and deploy legacy applications of all kinds with OpenShift. Hopefully you had fun in the process and found some inspiration. Visit developers.redhat.com/learn for more examples of using OpenShift and the wonderful ecosystem that comes with it.
Enter the following codes while playing DOOM to unlock additional tricks:
- `idspispopd` - Lets you walk through walls. Get a little closer to the monsters!
- `idkfa` - Gives you all weapons, keys, and full ammo. Then press `5`. Use with caution!
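When you're all done, you can tear down everything this walkthrough created by deleting the two projects; this removes the builds, deployments, services, and routes inside them:
# clean up
oc delete project ocpdoom monsters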