Rewrite to use websockets. #10

Open
pmfirestone opened this issue Feb 28, 2024 · 3 comments

@pmfirestone

pmfirestone commented Feb 28, 2024

I've become convinced that the basic problems with the WebRTC/PeerJS implementation are not going to go away. Though this version has the advantage of permitting the entire application to be served statically, without any work for a server, it is simply too difficult to reliably guarantee connections between arbitrary peers across the network.

Without doing too much damage to the rest of the program, I've changed the underlying system to use websockets instead. This requires a dedicated server program to forward requests between clients. This has the disadvantage of requiring a server somewhere, but the current deployment is so lightweight and receives so little traffic that I've only burned through 1% of my free $5 from fly.io. I'm happy to foot the bill for $5/month for this, frankly.
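
To make the shape of this concrete, here is a stripped-down sketch of the relay idea in TypeScript with the `ws` package (illustrative only; the session handling and names are placeholders, not the actual server code):

```ts
// Simplified sketch of the relay idea (not the actual server code).
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
// Map of session id -> sockets currently in that session.
const sessions = new Map<string, Set<WebSocket>>();

wss.on("connection", (socket, req) => {
  // Placeholder: derive the session id from the connection URL.
  const sessionId =
    new URL(req.url ?? "/", "http://relay").searchParams.get("session") ?? "default";
  const peers = sessions.get(sessionId) ?? new Set<WebSocket>();
  peers.add(socket);
  sessions.set(sessionId, peers);

  socket.on("message", (data) => {
    // Forward every message to the other clients in the same session.
    for (const peer of peers) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString());
      }
    }
  });

  socket.on("close", () => {
    peers.delete(socket);
  });
});
```

The real server carries a bit more state (see the next paragraph about tracking connected players), but the core loop is just this fan-out.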

The primary changes to the client were in the functions usePlayerConnection and useDmConnection. These were entirely rewritten to handle a new set of rules for sending and receiving messages. I also moved the warden's logic for keeping track of which players are connected into the server, which required adding a new message type for the server to tell the warden which players are connected.
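
For reference, the new message handling looks roughly like this (the type and field names here are illustrative, not necessarily what is in my branch):

```ts
// Illustrative sketch: the actual message names/shapes in my branch may differ.
type ServerMessage =
  | { type: "relay"; payload: unknown } // ordinary client traffic forwarded by the server
  | { type: "playersConnected"; playerIds: string[] }; // new: server tells the warden who is online

// Roughly how a hook like useDmConnection consumes it:
function handleServerMessage(
  msg: ServerMessage,
  setConnectedPlayers: (ids: string[]) => void
) {
  if (msg.type === "playersConnected") {
    // The warden no longer tracks connections itself; the server reports them.
    setConnectedPlayers(msg.playerIds);
  }
}
```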

For more details about the proposed changes, see the modified client and server at:

I am not formatting this as a pull request since the changes are at the end of a long series of miscellaneous fixes and improvements made for personal use. Before preparing a clean pull request, I'd like to ask your input and opinion. I can confirm that this resolved some difficult connection issues in my case and would like to invite other people to test this new version.

PS: this version is deployed at https://mothership-assistant.surge.sh

@sbergot
Owner

sbergot commented Feb 29, 2024

So this is something I had considered (and somewhat implemented in an old project), but I really wanted to avoid any kind of infrastructure. The reason isn't the cost but rather the maintainability & monitoring required.

@sbergot
Owner

sbergot commented Feb 29, 2024

How do you see this test? Do you want to invite people to https://mothership-assistant.surge.sh/, or should we deploy it on https://mothership-assistant-canary.wandering-mushroom.com/ ?

@pmfirestone
Author

It's up to you: I'm deploying to that URL because I'm using surge and that URL is allocated for free by them. I'm not sure what your system is at your URL, but you're welcome to deploy there. For now the server is only accepting requests from the surge.sh URL, but it'd be trivial to add wandering-mushroom.com to the list of acceptable URLs.
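
With the `ws` package that check amounts to a single option; something like this (sketch, not the deployed code):

```ts
// Origin allow-list sketch (illustrative).
import { WebSocketServer } from "ws";

const ALLOWED_ORIGINS = new Set([
  "https://mothership-assistant.surge.sh",
  // "https://mothership-assistant-canary.wandering-mushroom.com", // adding yours is one more entry
]);

const wss = new WebSocketServer({
  port: 8080,
  verifyClient: ({ origin }) => ALLOWED_ORIGINS.has(origin),
});
```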

About the maintenance and monitoring: the logic of the server here is very simple (it pretty much just relays client messages and sends view messages to all clients), but I see what you mean about adding continuous monitoring. It seems to me that a) the system works much more reliably and is logically less complex than the p2p setup (fewer special cases, less can go wrong, negotiating connections is more reliable), and b) monitoring the backend deployment is not that much more labor than deploying the static frontend. Indeed, that's why I chose to deploy it as a container on someone else's infrastructure rather than spin up a server from scratch. I'm certainly not looking to get into the weeds of running a reverse proxy through nginx or whatever: that'd be way too much.

Seeing as I haven't been able to get the p2p version of the client to work reliably, and this one works without fault for me and my players, I have every intention of continuing to use it. One of my priorities is to improve the health checks on the server container and clean up its testing, so fingers crossed it'll pretty much monitor itself. Especially since there aren't really any changes to be made to its interface (though some changes to the data structures and request routing will be necessary to scale past a single VM), it seems relatively straightforward.
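
Concretely, the health check can be as simple as an HTTP endpoint served next to the websocket upgrade, something the container platform can poll (sketch only; the path and port are placeholders):

```ts
// Minimal health-check sketch (hypothetical, not the current setup).
import { createServer } from "http";
import { WebSocketServer } from "ws";

const server = createServer((req, res) => {
  if (req.url === "/healthz") {
    // fly.io (or any orchestrator) can poll this to confirm the relay is alive.
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("ok");
    return;
  }
  res.writeHead(404);
  res.end();
});

const wss = new WebSocketServer({ server });
wss.on("connection", () => {
  // relay logic as sketched above
});

server.listen(8080);
```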
