A Partial Realization of a UDP Programming Challenge

By chance and boredom, I came across this article, which I'll summarize in the following paragraphs:
https://mas-bandwidth.com/writing-highly-scalable-backends-in-udp/

This article discusses concerns with ``scalable'' network systems, a term which typically means inefficient usage of the available resources. For some reason, the usual path is to use the least efficient network protocol in existence, HTTPS; such a system ``scales'' when massive resources are present to waste. The author goes on to describe using UDP for its one purpose as ``out there'', and uses the following problem as an interview question, abbreviated: Implement a server for a UDP protocol in which client programs send many packets per second of one hundred octets each; the server returns the FNV-1a hash digest of each. The difficulty of the problem is threefold: the Go programming language is required; use of a particular ``cloud'' infrastructure is mandated; and the Go server is intended to communicate with that infrastructure over HTTPS to get the result. I ignored all three annoyances.

This was fun to write in Ada, as I'd not yet made particular use of its tasking features, until now. My server uses an inlined FNV-1a function and a helper function to convert its result to octets, and takes two parameters: a port and an optional task count. The server defines a simple task type without any parameters, sets the tasks loose, and does nothing else. Every task is an endless loop that gets a UDP packet, calculates its FNV-1a hash digest, and sends it back to its origin. The server is very quiet, following in the proud UNIX tradition of not overwhelming the user with verbosity; if this server returns, then something went wrong, and a distinguished user will quickly see his error.
I've no good way to test the performance of this server, but it should suffice, although I'd not be surprised to learn enough waste underneath prevents it from sufficing. A single processor should be able to handle many millions of such packets, although the program likely won't have the entire processor to itself; but a ``single'' machine nowadays has many processors that can work at once, and clients may be expected to use the DNS to find the server, so putting multiple Internet addresses in the response would allow multiple such machines to bear the load, if needed. I've made no attempt to have my server learn how many processors are available, nor to assign tasks to each, but this could be done.

Removing HTTPS is the obvious way to make something slow fast, and any serious design would do so, to avoid the waste coming from a large infrastructure of what ought to be recognized as supercomputers.