Stefan Lesser 2024-05-20 16:05:49

This is one of those questions where I don’t really know yet how to ask it, so let me fumble and handwave a little bit and see where this goes:

In computing history we went from printers to screens, and on those screens from a brief stint with vector graphics to bitmap displays, and on those bitmap displays from text mode to frame buffer, and in those frame buffers from sprites and blitting to rasterization and compositing. In the early days, when there wasn’t enough RAM for a full-screen bitmap frame buffer, character glyphs and sprites were brought in from ROM. Now we have so much memory that we have double-/triple-buffering and realtime compositing of separately stored textures that often exceed the number of screen pixels available by an order of magnitude or more.

I’m particularly interested in the early transition to raster graphics. At some point (and I assume that was probably with PostScript?) it became feasible to compute graphics on the fly instead of having them prepared as bitmaps in ROM or on disk. If I remember correctly, PostScript was invented because, given all the different font sizes, it was more economical to ship instructions that generate glyphs on the fly on the printer than to ship every possible glyph as a bitmap in every size.

In a way we went from a “final” representation of a map of bits restricted to a certain grid size to an “intermediate” representation of instructions that have to be executed to generate the final map of bits in the desired size. Alternatively, we could see that as swapping space (memory) for time (compute).

Are you aware of any papers or other material that compares both sides of this transition?

For instance, in terms of performance in space and time, i.e. how much compute is needed for how much memory is saved. Or, more broadly, how we settled on certain graphics primitives because they were cheap enough to compute, and on certain data formats because they were small enough in memory, so that this trade-off made sense.
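
To make the space/time question concrete, here is the kind of back-of-the-envelope arithmetic I have in mind, as a small Python sketch. Every number in it (glyph count, sizes, outline complexity, bytes per coordinate) is a made-up assumption rather than a figure from PostScript or any real font format; it only illustrates why shipping outlines plus a rasterizer wins over shipping prerendered bitmaps once many sizes are involved.

```
# Illustrative only: compares storing prerendered 1-bit glyph bitmaps at
# several sizes against storing compact outlines and rasterizing on demand.
GLYPHS = 256                        # glyphs in one font (assumption)
SIZES_PT = [6, 8, 10, 12, 14, 18, 24, 36, 48, 72]
DPI = 300                           # laser-printer resolution (assumption)

def bitmap_bytes(size_pt):
    # Treat a glyph cell as roughly size_pt points square at DPI, 1 bit/pixel.
    px = round(size_pt / 72 * DPI)
    return px * px // 8

prerendered = GLYPHS * sum(bitmap_bytes(s) for s in SIZES_PT)

# Outline form: assume ~30 curve segments per glyph, 4 control points each,
# 2 bytes per coordinate -- again purely an assumption.
outline = GLYPHS * 30 * 4 * 2 * 2

print(f"prerendered bitmaps: {prerendered / 1024:.0f} KiB")  # ~5500 KiB
print(f"outlines:            {outline / 1024:.0f} KiB")      # ~120 KiB
# The outline copy covers every size; the bitmap total grows with each
# additional size -- that growth is the memory being traded for compute.
```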

Stefan Lesser 2024-05-20 16:21:30

I guess a different way of describing what I’m looking for is this: Today, graphics primitives are a “solved problem” and part of any stack that you could possibly use (except maybe for low-level systems programming).

Back in the days I’m referring to, we were just figuring all that out. I’m looking for any kind of material that exposes the thinking processes that led to the graphics stacks we use today, but also surfaces some of the abandoned (or never really investigated) ideas of what could’ve been.

Personal Dynamic Media 2024-05-20 16:28:58

You may find Principles of Interactive Computer Graphics by Newman and Sproull interesting. It may not go into all of the comparisons and trade-offs that you are discussing, but it certainly has discussions relevant to questions about how and why we do things certain ways.

archive.org/details/principlesofinter00newm/mode/1up

Kartik Agaram 2024-05-20 17:14:15

cohost.org/mcc/post/1406157-i-want-to-talk-about-webgpu from last year sticks out in my mind as having really great historical context.

Paul Tarvydas 2024-05-20 18:49:03

I can't directly answer your question, but if I were trying to, I would look at:

  • early issues of Siggraph
  • early issues of ACM ToG (Transactions on Graphics)
  • early literature from Apple re. Apple PostScript printers (a key ingredient in Apple's success & $$$)
  • early literature from Adobe re. PostScript
  • early literature from Sun Microsystems re. NeWS (capable of rendering circular windows)
  • I remember the name Alain Fournier being associated with the transition from clunky graphics to realistic graphics (did he say anything of interest to the above question?)

FWIW:

  • "vector" beams came from the way that cathode ray tubes (oscilloscopes and TV) and raster scanning worked
  • I blame the use of fixed-sized grids of non-overlapping small bitmaps ("characters") on Gutenberg and the use of clay tablets and papyrus mixed in with the concepts of arrays of bits (i.e. digital memory)
  • rows and columns of input switches were used in IBM Selectric typewriters and keyboards, which naturally led to the idea of rows and columns of core memory bits, dot-matrix printing, and output pixels. 7-segment displays led to grids of pixels-on-a-chip, etc. The key idea is optimizing pin count by using row/column addressing instead of addressing each pixel directly, the same optimization used in PLCs (see the toy sketch after this list).
  • the epitome of row/column addressing was the invention of the CPU chip attached to Random Access Memory, IMO
  • I think that we're only beginning to use scalable vector graphics in the form of SVG and variants of XML
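
Not from any of that literature, just a toy Python sketch of the row/column point above: an 8x8 switch matrix read by scanning needs 8 + 8 lines instead of 8 x 8, trading pins for scan time. The "hardware" here is only a set of coordinates standing in for closed switches.

```
# Drive one row at a time and sample every column line; a real controller
# would do this against GPIO pins, this only simulates the idea.
ROWS, COLS = 8, 8
pressed = {(2, 5), (7, 0)}          # assumed closed switches

def scan_matrix():
    hits = []
    for r in range(ROWS):           # energise row r
        for c in range(COLS):       # read back column c
            if (r, c) in pressed:   # switch at (r, c) connects row to column
                hits.append((r, c))
    return hits

print(scan_matrix())                # -> [(2, 5), (7, 0)]
```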

(... asking a more-knowledgable friend about this question ...)

Oleksandr Kryvonos 2024-05-24 12:23:03

Does anyone know if there is a newer implementation of something similar to the GRAIL system from the RAND Corporation? youtube.com/watch?v=2Cq8S3jzJiQ

hamish todd 2024-05-26 19:07:35

Unreal Blueprints is similar