Now I feel old. I remember when this was new and clever. Made downloading from BBS's much nicer over XModem. At 2400 baud or whatever was current then!
Yup, the first problem with Xmodem was that it transferred sectors (as in CP/M), and thus files were always padded with zeros to a multiple of 128 bytes.
Then came Ymodem which fixed this, and then Zmodem which was considerably faster (windowing protocol) and allowed resuming interrupted transfers and handled CRC checksums.
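A minimal sketch of that XMODEM padding behavior (plain Python, not any historical implementation; the pad byte varied in practice, with CP/M's 0x1A SUB being common alongside zeros):

```python
def xmodem_padded(data: bytes, block_size: int = 128, pad: bytes = b"\x1a") -> bytes:
    """Pad data to a multiple of block_size, as XMODEM senders did.

    Because only whole blocks were sent, the receiver could never
    recover the exact file length; YMODEM fixed this by sending the
    real size in its header block.
    """
    remainder = len(data) % block_size
    if remainder:
        data += pad * (block_size - remainder)
    return data
```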
The resume was the clincher. You'd spend over an hour getting 3/4 of the way through downloading an enormous 1 MB file, only to drop to line noise. OK, start again...
Saving up 400+ for a Courier HST to go at an insanely fast 12k (very rarely). They beat everything else at holding a line, though.
Saving for a Zyxel and get 16800 :)
Lots of good memories for the Elite 2864ID (ISDN) and U-1496E.
I had a project ~15 years ago with several thousand fax clients, and interestingly we had problems with every hardware card or modem we used except the (even then quite old) U-1496E, which worked without flaw whatever data the customers sent.
As soon as you saved and bought, they announced 56k! Fun times. :p
Talking of modems, a place I worked around '90 had racks full of modems for customer dial-in. What never occurred to me until I saw it was each one needed a phone line - half the back wall was covered with phone points!
Some years later a project was trying to do caller ID to computer. Simple idea, ridiculous number of issues. Of the dozens of modems that claimed to, we found only one that actually talked UK CLI. UK Cable spoke US CLI.
There were a bunch of different something-Modems around that time that fiddled around with window sizes, retransmission approaches, batch transmission, and so forth. In case anyone is wondering, there was a YModem (also developed by Forsberg) in addition to XModem and ZModem along with various variants.
I even interned for a small company one summer that implemented one of these for a timesharing service using Prime minis.
I got my first modem in 1990 or thereabouts. For some reason there was still a lot of XModem use back then, but it was gradually replaced by ZModem.
Some years later, just before BBSes were obsoleted by the internet, several multiplexing protocols appeared which let you chat or play a MUD while downloading or even upload and download at the same time.
Can't remember any names, but these protocols were only popular for a very brief time before the internet took over.
I also recall some BBSes survived a while into the internet age by adding telnet ports. You ran your normal dialer software with a telnet driver that took the hostname as the phone number and then would end up running ZModem and the likes over telnet.
I remember when we did our own zmodem implementation so we could bypass download limits. Large BBSes at the time had restrictions on how many KB you could download in a day, but they only counted completed file transfers. ZMODEM had a provision for restarting failed transfers, so it was normal for a transfer to fail before completion (call waiting!) and the user to come back later to finish it. Our implementation could receive the last packet successfully, then push the transfer back a few blocks (retries) and fail.
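A toy model of the trick described above (hypothetical names, none of the real ZMODEM framing): keep everything that arrived, but report a file position a few blocks earlier before "failing", so the board never logs a completed transfer.

```python
def cheat_session(bytes_received: int, block: int = 1024, pushback: int = 3) -> tuple:
    """Model one deliberately 'failed' ZMODEM receive.

    The receiver really saved `bytes_received` bytes, but tells the
    sender (and thus the BBS's accounting) that the transfer died at
    an offset `pushback` blocks earlier: an incomplete transfer,
    which the daily download quota never counted.
    """
    reported_offset = max(0, bytes_received - pushback * block)
    return bytes_received, reported_offset
```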
It's still hard to match the convenience of moving files to and from a server with RZ/SZ.
If your terminal emulator supports it, it will see the trigger sequence emitted and start the modem protocol over your ssh connection and save the file.
Since it runs over your current ssh connection it's perfectly secure.
It's so much easier than switching to a local terminal to use scp to get or put a local file. Sure, I know it's not the most efficient protocol, but with the speed of today's networks, the kind of files you're dealing with when doing administration or development are more quickly transferred by not having to do the context switch.
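The auto-start trick hinges on sz announcing itself in-band: a ZMODEM hex header begins with the bytes `*`, `*`, ZDLE (0x18), `B`, so a terminal emulator can simply watch its output stream for that signature. A hedged sketch of the detection side:

```python
# ZPAD ZPAD ZDLE 'B': the start of a ZMODEM hex header, as emitted by
# sz's opening ZRQINIT frame (after its courtesy "rz\r" line).
ZMODEM_HEX_HDR = b"**\x18B"

def find_zmodem_start(stream: bytes) -> int:
    """Return the offset where a ZMODEM hex header begins, or -1.

    A terminal emulator with zmodem support scans incoming bytes for
    this signature and, on a hit, hands the connection to its local
    receive implementation; that is why sz over ssh "just works".
    """
    return stream.find(ZMODEM_HEX_HDR)
```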
If there were a Mac terminal that did scp with shell integration on the host, I'd switch to that, but I haven't found one. Is there?
Apparently iTerm2 can be used to set up lrzsz coupling [1]. I haven't tried it yet myself, but I'm going to! I really hope this works. It's such a pain to set up scp back to my workstation that if I need to do any real back and forth, I've been known to set up a reverse SSH tunnel just to get scp working. But even then, it's a pain. Zmodem would be so much easier (and make me a little nostalgic inside every time I used it).
I still use sz/rz on an almost daily basis, for doing embedded system development. The system I work with is only accessible over UART, so rz/sz is the only way to transfer files to the host PC (well, the only convenient way; the other way is through JTAG, which is a PITA, and only useful for imaging, not just transferring some small data file from the running system).
Same here. UART isn't always the only option on my platforms, but if Ethernet, WiFi, or USB gadget is unavailable for whatever reason, sz/rz (from Linux) or loady (ymodem, from U-Boot) is usually the next best choice. Minicom makes using these fairly convenient.
Looking back at stuff like Zmodem, it's instructive to compare the computing culture then and now. Back then, everything was so cryptic.
Documentation was hard-to-find printed books; code and config were extremely abbreviated (sometimes for legitimate technical reasons of storage capacity). It was all very off-putting unless you put a lot of effort into research, or ran in the social circles that already had the data and knowledge.
Nowadays we have an embarrassment of riches of information and community and expressiveness and usability (when the APIs aren't locked down by DRM and trade secrets).
The other thing about coding back in those days is, if you got stuck, there was no Stack Overflow. You had to figure it out yourself, or maybe ask a small group of people on a BBS or at a user group. Also, there was hardly any open source, so things that seem trivial now, like persistence or fast data structures, were considerable efforts.
The most striking thing to me was there was a huge amount of free-as-in-beer software powering the entire BBS "network" and lots of helpful communities but almost nothing was open source.
From what I could tell, it took a long time for the academic tradition of making the source available to work its way into the non-academic world. The academics were using UNIX and other mainframe/minicomputer systems, but non-academics at home didn't have much access to this stuff unless they got themselves an account somewhere they could access through a modem.

So everyone else was "raised" in the PC tradition, which came from MS-DOS and the 8-bit micros before it. There, there was a bunch of "shareware" and a bunch of freeware available on BBSes, but the whole idea of passing around source code so everyone could benefit or make modifications just hadn't come up. I grew up on BBSes, and honestly the idea of having source code never even occurred to me until I started hearing about it when Linux became popular in the early 90s.

The first real open-source stuff (which was really Free software), which pushed it as a philosophy, had to be from RMS and his GNU project, but I never even heard of that until Linux came along, and I think it was mostly confined to the insular academic world before then. When Linux came along, the GNU project finally had a kernel and could be put together into an actual working OS, and at the very same time the internet was really catching on outside the academic world (this is about 1992, remember), with the WWW only a couple years away. I'd say it was the perfect trifecta: the already-existing GNU software and tools (glibc, gcc, etc. are really important for having a working OS and building stuff), the Linux kernel, and the new-found accessibility of the internet allowing global communication and collaboration.
It's kind of weird how certain computing platforms that died around this time period have frozen in time this aspect of the modern "sustainment" movements. For example, the Amiga scene these days is full of high-effort oddball software and hardware projects that are closed source and for sale (and likely to earn their creators literally dozens of sales).
You would have needed to get hold of a compiler then, probably the specific compiler that was used to build that source. A few hundred $ here and there, and hope it would compile on your machine.
Another thing is that once you logged onto a bbs, that is what you would be doing until you logged off.
These days my computer is logged onto a number of computers via various protocols all at the same time (especially if you torture the idea a bit and consider each web site open an anonymous login).
Kermit was in the same space as Zmodem. I remember it fondly, as they were very progressive around "open source" way back in the early 90's, and probably earlier. They accepted what was essentially a "pull request" from me over USENET discussion forums to add a feature.
I did put "open source" in quotes for that reason. At the time, it was somewhat novel to be able to get the source code at all, and even more novel that the writers would accept code from end users to put back into the product.
>At the time, it was somewhat novel to be able to get the source code at all
Really? IIRC, in Jason Scott's excellent BBS documentary (you should go watch it. Like, right now. It's that good), Thom Henderson (from SEA. Remember them? I don't, because I wasn't alive. Which is why I have to rely on documentaries) said that it was expected in UNIX circles, which is where Kermit originated.
Of course, in the early days of unix, you got the OS in source form, but those were long gone by then. AFAIK, a lot of independently developed utilities were distributed in source form, in part because it was common to have to edit the sources and compile yourself if you weren't on the exact same architecture as the author. But that's just what I've heard.
I remember receiving a 2400' tape for our VAX 11/750 from Lancaster University (UK) containing a full dump of their Kermit repo. Having pulled off the relevant VAX/VMS, PC, and various other program versions, we had a way to transfer files between a number of in-house development systems. This would have been about 1983.
Who remembers IceZmodem? A backwards compatible protocol that allowed you to play Tetris or listen to .MOD files while downloading... and if you were downloading from an IceZmodem capable host, you'd even get increased download speeds!
Hey, I recognized all of that software save ZMODEM, and I'm not out of high school yet! Then again, I only know about it because I read The Cuckoo's Egg, and am totally obsessed with computer history.
But I think most of us would recognize baud, even if we've never worked with anything that you would measure in it.
Fun times. My first experience with modem file downloads was the ASCII Express protocol on my old Apple //. Cutting my teeth on modem init strings, and keeping current with file protocols helped me break into IT despite not having a degree in it.
For example: "What, your Xmodem transfer always fails on the same block... oh, you must have xon/xoff flow control turned on. Try swapping that &K4 with an &K3 instead."
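The reason software flow control breaks binary transfers like that, roughly: XON (0x11) and XOFF (0x13) are ordinary byte values that occur inside binary data, and with XON/XOFF flow control enabled the link acts on them instead of delivering them. A toy check, assuming nothing about any particular modem:

```python
XON, XOFF = 0x11, 0x13  # DC1 / DC3, acted on in-band by software flow control

def flow_control_hazard(block: bytes) -> bool:
    """True if an XMODEM block contains bytes that XON/XOFF software
    flow control would intercept instead of delivering; this is why
    binary transfers wanted hardware (RTS/CTS) flow control instead.
    """
    return any(b in (XON, XOFF) for b in block)
```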
At one wacky point in my career, I ended up supporting a program "SPC" (software protocol converter) that flipped zmodem over an X.25 network to SNA to a mainframe. It ran on an OS/2 Warp box. I wish I was making that up. I really do.
It was a layer over protocols like zmodem that rendered images as they downloaded. Think of that for a second. Modems were so slow that you could watch the image line by line render to decide if you wanted to continue on with the download. Yes they were porn images.
GifLink Filename: GIFLK112.ZIP Price: $30
A Zmodem that allows you to view GIF images while you download. Maybe a nice feature if you really spend hours and hours downloading GIFs.
I'm not sure when that price was active. (That text has a copyright of 1994 to 1997).
I remember the "porn" images from the BBS days. They were pretty laughable really, usually just pictures of naked women, frequently showing only their bare breasts and obscuring the rest. I think they were frequently just scans from Playboy, which was rather tame.
Compare to these days, where there's all kinds of crazy hard-core stuff available for free with a quick Bing search, in both still-photo and video.
Kermit's design accommodated connections that zmodem couldn't (half-duplex, connections that don't let one send some control characters, seven-bit connections, or connections between systems using ASCII and systems using EBCDIC), and the basic protocol was easy to write. You could use Kermit to send data between your C-64 and a Cray 1 or Burroughs Large Systems machine, and it was used to send data from the International Space Station to Earth. Later versions supported sliding windows and large packets where possible and let one selectively quote characters, while still providing backwards compatibility with earlier versions.
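A rough sketch of the basic Kermit packet layout (SOH, length, sequence, type, data, single-character checksum, CR), with every field after SOH shifted into the printable range by Kermit's char encoding; that printability is exactly what let it cross seven-bit, control-character-eating links. This is a simplification: real Kermit also quotes control characters inside the data field, which is omitted here.

```python
SOH = b"\x01"

def tochar(x: int) -> bytes:
    # Kermit's printable encoding: shift a small value into ASCII space.
    return bytes([x + 32])

def kermit_packet(seq: int, ptype: str, data: bytes = b"") -> bytes:
    """Build a basic Kermit packet: SOH LEN SEQ TYPE DATA CHECK CR.

    LEN counts SEQ + TYPE + DATA + CHECK; the single-character
    checksum folds bits 6-7 of the byte sum back into the low six
    bits, per the classic Kermit formula.
    """
    body = tochar(len(data) + 3) + tochar(seq % 64) + ptype.encode("ascii") + data
    s = sum(body)
    check = tochar((s + ((s & 192) >> 6)) & 63)
    return SOH + body + check + b"\r"
```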
My impression? Kermit worked (if slowly) when other protocols wouldn't. Back in college, when I dialed into the University and attempted to download files, Kermit was the only one that would work across the insane university network (you dialed into a DEC terminal server, then you would issue a connect command to the actual computer you wanted. You could attempt to get an 8-bit clean path by sending a break character to get back to the terminal server, issue another command, then resume the connection, but for some reason, that almost never worked in my case).
Surprised nobody mentioned BiModem in the comments (popular at least in Europe). Simultaneous up- and downloads, which was great for keeping those up/down ratios at their proper levels.
BBSes: Yet another technology I missed, because I wasn't alive. Ah well.
Speaking of things that I wasn't there for, did you know TinyTIM's still up? Sketch (perhaps better known around here as Jason Scott) hasn't actually killed it because it's so cheap to run. Mind, nobody's actually around anymore, but you can explore the relics of a once-great civilization over at http://www.tim.org/.
I remember trying out Zmodem early on, only to find that it obviously had some kind of bug in that it would reliably and consistently drop the line after a certain amount of data had been transferred.
Not so, of course - but unlike Xmodem and Ymodem which acknowledged every packet (and slowed things down), Zmodem would not constantly ack back to the host unless there was an error... So this triggered the modem's inactivity timeout, since despite the constant stream of data coming down from the remote system, there were no "keystrokes" or other packets travelling in the other direction. Hence, after X minutes... Line drop.
Easily sorted with a few AT commands, eventually. :)
When modeming into the university unix system, I really enjoyed being able to type `sz file` and bam the file is on my local machine, with progress meter and everything.
So I wrote rzh, "receive zmodem here". Navigate to where you want to receive files, run rzh (which gives you a subshell) then ssh anywhere. Bounce through six machines if you want. When you sz a file, rzh will receive it.
Just to add to the nostalgia: back in the 80's we used a protocol/app called BLAST[0] to grab data off of Data General minicomputers onto PCs. Hugely simple to use and damned reliable.
Zmodem's code is uglier than Kermit's IMO. Gods help you if you have to read either, but Kermit is written better and EK is almost understandable once you fold all unneeded #define's.
I'm surprised no-one mentioned the venerable UUCP. It has more variants than X/Y/Zmodem combined: at least 12.
I had a small "brush with greatness" in ... ~1984, or so? I was on Compuserve at 1200 baud, and although I forget the circumstance, I had a cserve "chat" with Ward Christensen; the inventor of Xmodem. I can't even remember what we talked about, but he was surprised that I knew who he was just by his name.
I ran TriBBS for years as a kid in the Maryland/DC metro area (301).
My OS of choice was OS/2 Warp. It could handle a full-time, two-node BBS and still allow me to do other tasks without skipping a beat -- all on a 486 with 8 MB of RAM!
I also remember being rather envious of how cool Renegade, WWIV and company looked. Not to mention the more exotic software powering the, er, rather more questionable boards: software with ominous names like ViSiON-X or Oblivion/2... But after trying them all, I eventually switched from TriBBS to PCBoard.
It was the right choice: I loved the flexibility of PCBoard's scripting language and C SDK. Adding functionality to my BBS is what prompted me to pony up for Borland's Turbo C++ and get serious about programming.
The sad thing is I'm stuck on a codebase that uses Xmodem for core operations and have to forgo such niceties as having file name, size, and timestamp automatically handled. I don't know why they didn't just implement Ymodem like they should have.
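For anyone who hasn't seen it, the YMODEM "block 0" being missed here is just a 128-byte payload carrying the filename, a NUL, then the size and optionally the modification time as octal Unix seconds, NUL-padded. A hedged sketch (framing, block number, and CRC omitted):

```python
def ymodem_block0(name: str, size: int, mtime=None) -> bytes:
    """Build the 128-byte YMODEM 'block 0' payload that XMODEM lacks:
    filename NUL "size[ mtime]", NUL-padded, with mtime as octal
    Unix seconds. Framing (SOH, block number 0, CRC) is omitted.
    """
    meta = name.encode("ascii") + b"\x00" + str(size).encode("ascii")
    if mtime is not None:
        meta += b" " + oct(int(mtime))[2:].encode("ascii")
    if len(meta) > 128:
        raise ValueError("metadata too long for a 128-byte block")
    return meta.ljust(128, b"\x00")
```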
Screen[1] contains an implementation of zmodem (and is one of screen's advantages over tmux). I've used it before on VMs on which I had a serial console but no network connection or other way to get/put files.
Very cool! I didn't realize screen had this. I've always used sz/rz when needed. It's kludgy but comes in handy every so often. I have it installed on all my linux boxes for that special occasion.
Yup! It's only once every year or two that I've needed it, but damn is it a handy thing to have in your back pocket when nothing else works.
Last time I needed it, I also ended up using xxd to get the zmodem binary on to the device. Fun times, fun times. The device was buried in a concrete box under a frozen highway, and the Ethernet died, so we had to find a way...
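The xxd trick generalizes: hex-encode the binary on one side, paste it through the dumb terminal, decode on the other. A small sketch of the same idea in Python (the real thing was xxd output pasted into the console, then reversed with xxd on the device):

```python
import binascii

def to_hex_lines(blob: bytes, width: int = 32) -> list:
    """Hex-encode a binary into short lines that are safe to paste
    through a dumb terminal (the same idea as xxd on the way in)."""
    h = binascii.hexlify(blob).decode("ascii")
    return [h[i:i + width] for i in range(0, len(h), width)]

def from_hex_lines(lines) -> bytes:
    """Reassemble on the far side (the same idea as xxd -r -p)."""
    return binascii.unhexlify("".join(lines))
```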
Yes, seeing "ZMODEM" on the front page triggered some nostalgia... waiting for hours to download a few megabytes. At least ZMODEM supported resume!
Zmodem was the first big leap forward. Interesting how efficiency is baked into the brains of so many folks who used to have to think about every byte they might transmit or receive.