#past-company
When the industry switched over from mainframes and minicomputers to personal computers, how big a regression was the performance of a typical PC versus that of a mainframe / mini at the time? (Keeping this question vague because I'm not quite sure how to phrase it. Feel free to interpret it however you like.)
This is anecdotal because I’m an Ive era iMac baby, but my understanding is that it was sometimes a speedup: because of resource sharing, on many mainframes you couldn’t reliably get all the power available.
Yeah! That's sort of what made me wonder. Like, the mainframe had to time share across a bunch of jobs submitted by users, so presumably things slowed down when the number of people submitting jobs went up. So there's, like, a relative improvement in perceived performance when switching to PC. (And, AFAIK, a reduction in per-user cost.)
But in absolute terms? Like, if you had time on the mainframe when nobody else was around… how much fun was that? Like, for how long after someone got a PC would they still be tempted to stay late and submit jobs to the mainframe because it was that much faster?
(I'm reminded of my early days of doing 3d animation in high school, where I got to use around a dozen computers in the lab as an overnight render farm — the inverse of timesharing on a mainframe!)
This is not an easy question to answer, because it depends a lot on what you want to do, and in particular where the performance bottlenecks are. The biggest performance difference was in mass storage. Processing datasets larger than core memory was common practice in mainframes, but impossible on the first PCs. But for interactive use on small tasks, the PC was the best choice from day one.
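(Modern illustration, not from the thread: "processing datasets larger than core memory" just means touching the data in chunks that fit in memory, the way mainframe batch jobs worked through tape and disk datasets. A minimal sketch in Python — the file path and chunk size are placeholders:)

```python
# Sketch of out-of-core processing: stream a file in fixed-size chunks
# instead of loading it whole, so memory use stays O(chunk_size) no
# matter how large the dataset is. Chunk size of 64 KB is arbitrary.
def checksum_streaming(path, chunk_size=64 * 1024):
    """Sum all bytes of a file (mod 2**32) using O(chunk_size) memory."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:           # EOF reached
                break
            total = (total + sum(chunk)) & 0xFFFFFFFF
    return total
```

The same pattern (read a block, process it, write or accumulate, repeat) is what made mainframe-scale data work possible on machines whose core memory was a tiny fraction of the dataset — and what the first PCs, with small and slow floppies, were so bad at.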
Anecdata: In 1983, my high school had a few Video Genie computers (Tandy TRS-80 clones, with a Z80 processor) and a few terminals for remote access to the IBM mainframe of a nearby research center. We wrote and ran games on the Z80, which we couldn't have done on the mainframe, where response times for a simple command were about a minute (I couldn't try at night!). On the other hand, I used the mainframe for doing my math homework, solving systems of linear equations in six unknowns (in APL) with the same response time of a minute, which I found impressive. The next day, I decided that with that machine, I could go beyond the scale of my math homework. 50 unknowns, no problem - still one minute. The Z80 machine couldn't have handled those for lack of memory.
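(For scale, here's what that APL one-liner was doing under the hood — a Gaussian-elimination sketch in modern Python, not the original APL, with back-of-envelope memory arithmetic in the comments. The 16 KB figure is an assumption about a typical Video Genie configuration:)

```python
# Minimal Gaussian elimination with partial pivoting for A x = b.
def solve(a, b):
    """a: list of n rows of n floats; b: list of n floats. Both are mutated."""
    n = len(b)
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        # Eliminate this column from the rows below.
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= f * a[col][k]
            b[row] -= f * b[col]
    # Back-substitution.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (b[row] - s) / a[row][row]
    return x

# Rough memory arithmetic: an augmented 50 x 51 matrix of 5-byte BASIC
# floats is 50 * 51 * 5 = 12,750 bytes -- most of a 16 KB home machine
# once the interpreter and the program itself take their share.
```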
Echoing Konrad Hinsen, “it depends.” I went from 8-bit home machines to VAXes, and I was initially disappointed because there were no graphics on the DEC VT-100 terminals. But there was so much more memory (not to mention disk space!) available that I could compute more interesting things, even if I had to do it nearly blind. When personal computers started to get popular, they had ~640K of memory while VAXes could hold 128MB of RAM, so off-hours compute jobs continued to run much better on the bigger machines. OTOH, the Sun-2/Sun-3 workstations had 8MB/up to 32MB of RAM, plus a super nice display (for the times), which often made them feel much better in practical terms.
I spent my early career (circa late 2000s) deploying to various AS/400s, which are midrange machines. Up-front caveat: I used them for the most boring business-y work: querying databases, moving save files around, modifying some job queues; no computationally intensive tasks. I also worked from a modern laptop and only used the greenscreen for specific tasks. However, I spent a lot of time with people for whom an AS/400 terminal was their primary workspace.
I crashed all of Toronto IBM with a simple APL program (graphics - bar charts using RGB primary colours, no less (unheard of at the time)).
I crashed all of UofT computing with an assembler homework assignment (it took a few runs before they figured out that it was my fault).
I bought/owned a UNIX[sic] Nabu system. Before Linux was born.
Going from mainframe to VAX-11/780 to home-built S-100 was essentially a speed + memory + disk + $’s issue. I graduated from cassette tapes to 8" floppies when I could afford it on my student allowance.
Yet, something about desktop PCs was /different/ from mainframes. Desktop PCs allowed one to think in terms of /many/ computers instead of /timesharing/ a behemoth. Then came uucp, Napster, p2p, etc. I doubt that VisiCalc or HyperCard or blockchain or the internet could have been conceived of without the presence of cheap(er) home PCs.
To me, it’s not a question of brute strength but of ubiquity.
To me, computing hardware in 2023 is very, very different from computing hardware from 1950 onwards.
IMNSHO, CompSci is out to lunch. “SICP Considered Harmful”??? The Reality of 2023 hardware is vastly different from the Reality of 1950 hardware.
"Like, if you had time on the mainframe when nobody else was around…" I never had time on the first mainframe that I used. I submitted my program on punch cards and got back a printout a week later, hopefully with the results rather than a syntax error. So I have no idea of how fast it was. I think that it was common for many mainframes to be batch oriented, so for them, the question doesn't even make sense. But when I got my 68000 Amiga computer, it had the same processor as the WiCat and SUN minicomputers I was using, but I had it all to myself. The big difference was that they had 16X as much memory and disk drives instead of floppies.