NEW VERSION OF FOREM BBS SOFTWARE
----------------------------------
A new release of FoReM ST arrived yesterday. Among the features is
yet another new file transfer protocol, 'ZZZMODEM.' This new protocol
transfers data in blocks of 16 Megabytes, giving it the largest block size
of any file transfer protocol in the Known Universe. The checksum for each
block in a ZZZMODEM transfer is sent via XMODEM, for greater accuracy.
"This new protocol will allow us to transfer data at rates up to one one-
hundredth of one percent FASTER than by any previous method," explained
Phil "Compu" Dweeb, a FoReM aficionado, pausing occasionally to wipe
the drool from his chin.
Industry insiders were quick to point out that using ZZZMODEM, it
takes roughly 2 hours and 25 minutes to transfer a 20K file at 19,200 baud.
Mr. Dweeb said that this problem has been dealt with. "Each block is padded
with nulls, which take no time to send," he explained.
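For the record, the insiders' arithmetic more or less checks out, if you
assume one fully padded 16-Megabyte block and the usual 10 bits per byte on
the wire (8 data bits plus start and stop, an assumption on our part):

    BLOCK_BYTES = 16 * 1024 * 1024      # one fully padded ZZZMODEM block
    BITS_PER_BYTE = 10                  # 8 data bits + start + stop (assumed framing)
    BAUD = 19_200                       # treating baud as bits per second

    seconds = BLOCK_BYTES * BITS_PER_BYTE / BAUD
    hours, minutes = int(seconds // 3600), int(seconds % 3600 // 60)
    print(f"{hours} h {minutes} min")   # prints "2 h 25 min"; XMODEM checksums not included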
The new version of FoReM ST also has the new "Recursive ARCing"
feature. As Mr. Dweeb explains: "All download files are recursively ARCed
by FoReM before being put online. Our experience has shown that when you
ARC a file, it gets smaller. Therefore, the approach we have taken is to
repeatedly ARC the file until it reaches a size of roughly 10K. At that
point, it's hardly worth the trouble, wouldn't you say?"
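Taken literally, the procedure amounts to a loop along these lines (a sketch
only, with Python's zlib standing in for ARC; the guard is there because, in
practice, re-compressing already-compressed data stops shrinking after the
first pass):

    import zlib

    def recursive_arc(data: bytes, target: int = 10 * 1024) -> bytes:
        """Repeatedly compress the data until it is roughly 10K in size,
        or until a pass stops making it any smaller."""
        while len(data) > target:
            smaller = zlib.compress(data)
            if len(smaller) >= len(data):   # no further gain; give up here
                break
            data = smaller
        return data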
Reportedly in the works for a future release is the patented "One
Length Encoding" process. Early reports suggest that this procedure can
reduce the length of a file to just 1 bit. Mr. Dweeb takes up the story:
"One day we were sitting around doing some hacken and phreaken, and one of
us started thinking. All binary data is encoded into bits, which are
represented by ones and zeros. This is because a wire can either carry a
current or not, and wires can therefore be set up in a series that can
represent strings of ones and zeros.
"Notice, however, that the real
information is carried in the ones, since the others carry no current. I
mean, what good does a wire do when it isn't carrying any current? So by
dropping all the zeros, you can easily cut file sizes in half. So we
decided that a cool way to speed up data transfer would be to only send the
one bits. The results were phenomenal -- an average speed increase of 50%!!
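The "drop the zeros" step, taken at face value, looks something like this
sketch (the helper name is ours; how the receiving end is supposed to put
the zeros back is not explained):

    def drop_the_zeros(data: bytes) -> str:
        # Keep only the bits that carry current, i.e. the ones.
        bits = "".join(f"{byte:08b}" for byte in data)
        return bits.replace("0", "")

    print(drop_the_zeros(b"Hi"))   # 'H' is 01001000, 'i' is 01101001 -> '111111'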
"After we finished the initial implementation, we kept finding ways to
make the thing faster, and more efficient. But then we realised that we
hadn't gone all the way. If you think about it, after you drop all the
zeros, you're left with a string of ones. Simply count all the ones, and
you're left with another binary string. Say you end up with 7541 ones. In
binary, that's 1110101110101. So immediately we've reduced the number of
bits from 7541 to 13. But by simply repeating the process, we can reduce it
further. 1110101110101 becomes 111111111, or 9, which is 1001, which
becomes 2, which is 10, or 1. Once we reach a string length of 1, we have
reached maximum file compression. We now have the capability to encode
virtually unlimited amounts of information into a single digit! Long-
distance bills will never be the same!
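Mr. Dweeb's repeated counting step can be sketched in a few lines (the
function name is ours; the decompression side, as he concedes below, is
another matter):

    def one_length_encode(data: bytes) -> int:
        """Count the one bits, write the count in binary, and repeat
        until a single bit remains, exactly as described above."""
        ones = sum(bin(byte).count("1") for byte in data)
        while ones > 1:
            ones = bin(ones).count("1")   # e.g. 7541 -> 9 -> 2 -> 1
        return ones                       # 1, provided at least one bit was set

    # The worked example from the article: 7541 is 1110101110101, which has 9 ones.
    assert bin(7541).count("1") == 9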
"Now, that's not to say that there
aren't a few problems. The biggest one we have encountered is that for some
reason, there seems to be a certain amount of data loss during the
reconversion process. It seems that sometimes the file cannot be expanded
into its original form. So, the solution we came up with was to have an
encryption key associated with each file. When a One Length Encoded file is
received and is undergoing decompression, the unique encryption key must be
supplied. That way, we end up with a 100% success rate in our conversions!
"A problem which we are having difficulty resolving lies in the fact
that to ensure a 100% success rate, the encryption key must be exactly as
long as the original file. We are confident, however, that the use of our
Recursive ARCing procedure will help to solve this problem..."