built minipov2, ideas on
Kragen Javier Sitaker
kragen at pobox.com
Sat Oct 21 17:54:45 EDT 2006
On Fri, 10 Mar 2006 23:19:26 +0100, Dave Long wrote:
> > - Bresenham's line-drawing algorithm is a first-order delta-sigma
> > converter. This isn't news to the world, but it was news to me.
> > When I use a first-order delta-sigma converter instead of the usual
> > PWM crap to drive the LEDs to fade, they flicker a lot less.
> > A kragen-hacks post is on the way.
> Anti-aliasing in time?
I don't think so --- although you could use this technique for much
slower anti-aliasing in time.
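To make the Bresenham connection concrete, the inner loop I'm using is
just an error accumulator, the same shape as Bresenham's. A sketch in
Python (the real thing runs on the AVR; the names here are mine, for
illustration):

```python
def delta_sigma_1st(level, steps, full_scale=256):
    """First-order delta-sigma: add the desired brightness level into
    an error accumulator each tick; emit a 1 and subtract full scale
    whenever the accumulator overflows.  Identical in shape to
    Bresenham's error counter."""
    acc = 0
    out = []
    for _ in range(steps):
        acc += level
        if acc >= full_scale:
            acc -= full_scale
            out.append(1)  # LED on this tick
        else:
            out.append(0)  # LED off this tick
    return out
```

Over 256 ticks at level 64 this emits exactly 64 ones, but spread out
every 4 ticks instead of bunched into one 64-tick pulse, which is why
it flickers less.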
> Of course, whether PWM is crap or not depends upon your final time
> constant. If the output is too fast, even delta-sigma won't
> properly average out for reconstruction and one will see the digital
> PDM waveform instead.
Yes, but for a given temporal resolution and number of output levels,
PDM (pulse-density modulation? Is that what this is called?) gives you
a higher fundamental frequency than PWM, except at the extreme ends of
the scale.
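Concretely (a Python sketch; the function names are mine): at
mid-scale, PWM bunches all its on-time into one pulse per period,
while the delta-sigma/PDM pattern spreads it out, so the longest gap
between transitions is much shorter.

```python
def pwm_pattern(level, period=256):
    # PWM: one contiguous on-pulse per period
    return [1] * level + [0] * (period - level)

def pdm_pattern(level, period=256):
    # first-order delta-sigma: error accumulator, like Bresenham
    acc, out = 0, []
    for _ in range(period):
        acc += level
        if acc >= period:
            acc -= period
            out.append(1)
        else:
            out.append(0)
    return out

def longest_run(bits):
    # length of the longest run of identical output values
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best
```

At level 64 of 256, both patterns have the same average, but PWM sits
dark for 192 ticks in a row while the PDM pattern never goes more than
3 ticks without a pulse.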
> Economically, if actors in markets are too responsive, this
> translates into boom-bust cycles.
I see that it's sort of analogous in that the market is, in theory,
seeking an equilibrium price; but are there other components to the
analogy? Does the positive feedback of "momentum investors" map to
something in a digital output driver? Does the Bresenham error
counter map to some economic construct?
> Personally, I find it valuable to consciously low-pass the ups and
> downs of daily experience -- it stands to reason that keeping
> one's equilibrium results in a more accurate perception of the
> underlying input waveform.
I agree. Related is "impact bias": prospectively, people anticipate
that future ups and downs will be longer in duration than they
actually turn out to be. (And greater in amplitude, although that's a
less startling observation.)
> On the other hand, if the output is slow enough, even PWM is good
> enough. We have a TIG inverter which I believe has its current
> controlled by PWM, and since the relevant time scale for even a tiny
> manual weld pool is on the same order as eighth- or sixteenth-notes,
> PDM would probably just introduce more switching losses without any
> improvement in the ultimate process.
That's a good point. If higher frequency is bad rather than good, PWM
is probably optimal.
Still, the switching losses might relate to e.g. the third and fifth
harmonic of the PWM waveform rather than just its derivative, in which
case you can use PDM-approximated PWM (along the lines of Don
Lancaster's "magic sinewaves" and the Apple ][ color waveforms we've
been talking about) to damp those out. I don't understand switching
losses well enough to know.
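For scale: the odd harmonics of a 50%-duty bipolar square wave fall
off only as 1/k (the kth has amplitude 4/kπ), so the third and fifth
really are substantial. A numeric check, in Python:

```python
import math

def harmonic_amplitude(k, n=4096):
    """kth sine-harmonic coefficient of a 50%-duty bipolar square
    wave, computed as a discrete Fourier sum over one period of n
    samples."""
    total = 0.0
    for i in range(n):
        f = 1.0 if i < n // 2 else -1.0
        total += f * math.sin(2 * math.pi * k * i / n)
    return 2.0 * total / n
```

The first harmonic comes out near 4/π ≈ 1.27, the third near
4/3π ≈ 0.42, and the even harmonics vanish.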
> There's a theoretical parallel to all of this as well: one can think of
> batch computation as being the PWM pattern: first we provide all the
> inputs, then get all the outputs, whereas interactive computation (or a
> unix pipeline composed of online components) is the PDM pattern: inputs
> and outputs interleaved smoothly over the course of execution.
Here's an analogy table:
Bresenham lines    interactive pipeline     HCI UI              PWM/PDM
----------------   ----------------------   -----------------   ----------------
error counter      computer memory usage    human memory usage  flicker
horizontal         process A                human               off
vertical           process B                computer            on
half a rectangle   DOS temp-file pipes      batch processing    PWM? not quite
jaggies            context-switching waste  distraction waste   switching losses
"Distraction waste" refers to Raskin's argument that you shouldn't
have to look at the screen in order to type and issue commands (you
don't want to context-switch on every character between input and
output), which was one of my guiding principles in my later
pen-input-method prototypes. Note that pipe buffers have
traditionally been several kilobytes in size (even on machines where
each process was limited to 64kiB).
Another aspect of the batch-processing approach is that it takes
longer to detect and correct errors.
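For reference, here's the Bresenham inner loop whose parts the first
column of the table above names (a Python sketch, handling only
shallow left-to-right lines):

```python
def bresenham(x0, y0, x1, y1):
    """Bresenham's line algorithm, assuming 0 <= dy <= dx."""
    dx, dy = x1 - x0, y1 - y0
    err = 0          # the "error counter" row of the table
    y = y0
    pts = []
    for x in range(x0, x1 + 1):
        pts.append((x, y))
        err += dy            # each horizontal step accumulates error
        if 2 * err >= dx:    # enough error: take a vertical step
            err -= dx
            y += 1
    return pts
```

The "horizontal" and "vertical" moves and the overflow-and-reset of
the error counter are exactly the pieces the analogy table maps onto
processes, people, and output levels.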
PWM and PDM are really both very similar to the
interactive/pipeline/Bresenham approach. Can we make the analogy more
precise?
One way to extend the analogy to drawing lines with PWM would be to
draw alternating horizontal and diagonal segments, such that the
horizontal size of the sum of the two was always constant --- 5 pixels
of diagonal and 11 pixels of horizontal for one line, perhaps 11
pixels of diagonal and 5 pixels of horizontal for another. When one
or the other of the segments happens to be 1, the line looks the same
as a Bresenham line.
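A sketch of that construction in Python (I'm assuming, for
simplicity, that the period divides dx and that the slope works out
to a whole number of diagonal pixels per period):

```python
def pwm_line(dx, dy, period):
    """Draw a line PWM-style: per period of `period` x-steps, one
    contiguous diagonal run followed by one contiguous horizontal
    run.  Assumes period divides dx and dy * period / dx is an
    integer."""
    diag = dy * period // dx     # diagonal pixels per period
    pts = []
    x = y = 0
    for _ in range(dx // period):
        for _ in range(diag):            # the "on" part: diagonal run
            pts.append((x, y))
            x += 1
            y += 1
        for _ in range(period - diag):   # the "off" part: horizontal run
            pts.append((x, y))
            x += 1
    pts.append((x, y))
    return pts
```

With dx=32, dy=10, period=16 you get the 5-diagonal/11-horizontal
pattern mentioned above, repeated twice, ending at (32, 10).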
The analogy you proposed is that the diagonal segment be vertical
rather than diagonal, with the proviso that the horizontal line (the
time dimension) must always be the same length. The trouble with that
is that you can make it arbitrarily good by setting the horizontal
line length to 1 --- you don't have the tradeoff between resolution
(number of distinct slopes available) and frequency (jagginess) that
real PWM forces on you.
> On a finer scale, one can look at the patterns of substitutions and
> reductions over the course of a computation -- it is theoretically
> possible to follow the PWM style, and do all of the substitutions,
> followed by all of the reductions, but the exponential blow ups
> involved mean that we usually compute more in the PDM style, and
> interleave substitutions with reductions.
What do you mean by "reductions"? Beta-reductions and eta-reductions
are substitutions, aren't they? Whether a particular lambda makes
things bigger or smaller when you beta-reduce it depends on how many
occurrences of its parameter it contains.
> Floyd-Steinberg and other error-diffusion dithers seem to be the
> equivalent for spatial, rather than temporal, smoothing. It makes
> sense that PWM would result in a more "posterized" LED pattern than
> PDM would.
You're exactly right --- error-diffusion dithering and fixed dithering
are closely analogous to PDM vs. PWM.
I think I've heard someone refer to error-diffusion dithering as "FM"
and fixed dithering as "AM" --- the idea being that you modulate the
frequency with which individual black dots (resp. white dots) appear
in error-diffusion dithering, while in fixed dithering, you modulate
the amount of space each run of black or white dots consumes. The
analogy is not perfect, since the frequency is highest in the middle
of the scale and drops off equally in the dark and light directions,
all the way to DC, and PWM isn't quite AM. Although you could
probably fool an AM radio with it.
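Here's the correspondence in one dimension (a Python sketch): plain
thresholding --- the degenerate "fixed" case --- posterizes a flat
gray to solid black, while diffusing the quantization error onto the
next pixel preserves the average.

```python
def threshold_1d(samples, t=128):
    # fixed threshold: no error feedback at all
    return [255 if s >= t else 0 for s in samples]

def error_diffusion_1d(samples):
    # 1-D error diffusion: push each pixel's quantization error onto
    # the next pixel (the 1-D core of Floyd-Steinberg)
    out, err = [], 0.0
    for s in samples:
        v = s + err
        q = 255 if v >= 128 else 0
        err = v - q
        out.append(q)
    return out
```

On a flat gray of 64, thresholding outputs nothing but black, while
error diffusion turns on about one pixel in four --- the same behavior
as the delta-sigma LED driver, rotated into space.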
> > I wonder what other SIMDish operations are enabled by
> > perfect-shuffle. The idea that perfect-shuffle is useful for doing
> > the Walsh transform isn't new, but I haven't seen it suggested in
> > the context of implementations on CPUs before.
> Entries in an ordered-dither matrix are generated by the
> perfect-shuffle of the coordinate values.
Hmm, that's interesting.
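I went and checked: as far as I can tell, the recursive Bayer
ordered-dither matrix comes out of exactly that construction ---
interleave (perfect-shuffle) the bits of y and x XOR y, then
bit-reverse the result. A Python sketch:

```python
def bayer(n):
    """2^n x 2^n Bayer ordered-dither matrix via perfect shuffle:
    entry(x, y) = bit-reverse of the bitwise interleave of y (odd
    positions) with x XOR y (even positions)."""
    size = 1 << n

    def entry(x, y):
        v = 0
        for i in range(n):
            v |= ((x ^ y) >> i & 1) << (2 * i)      # even bits: x^y
            v |= (y >> i & 1) << (2 * i + 1)        # odd bits: y
        r = 0
        for i in range(2 * n):                      # reverse 2n bits
            r = (r << 1) | (v >> i & 1)
        return r

    return [[entry(x, y) for x in range(size)] for y in range(size)]
```

This reproduces the familiar 2x2 matrix [[0,2],[3,1]] and the 4x4
matrix starting [0,8,2,10].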
> Last month I ran across two independent sources which were wishing for
> such an instruction, and unfortunately can't remember them now.
Did you ever find them?
> (SIMD was definitely the context, though -- somewhat like a
> bit-level zip?)
It is indeed a bit-level zip.
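For the record, here's the software version on a 32-bit machine:
spread each operand's bits apart with shift-and-mask steps, then merge
(a Python sketch of the standard Hacker's-Delight-style trick; the
function names are mine).

```python
def zip_bits16(a, b):
    """Bit-level zip: interleave the low 16 bits of a and b into a
    32-bit word, with b's bits in the even positions and a's in the
    odd positions."""
    def spread(x):
        # spread the low 16 bits of x out to every other bit position
        x &= 0xFFFF
        x = (x | (x << 8)) & 0x00FF00FF
        x = (x | (x << 4)) & 0x0F0F0F0F
        x = (x | (x << 2)) & 0x33333333
        x = (x | (x << 1)) & 0x55555555
        return x
    return spread(b) | (spread(a) << 1)
```

Four shift-mask steps per operand, versus one instruction if the
hardware had a perfect shuffle.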
> I think Knuth argues that as long as we're throwing silicon at
> things, might as well use 64-bit registers to do arbitrary 8x8
> boolean matrix multiplies, of which shuffles are special cases.
That's an interesting idea --- is it in MMIX? Last time I looked,
MMIX had a "shuffle bytes of register arbitrarily and OR them
together" instruction, with the idea that you could use it to
implement NUXI/XINU/whatever byte-swapping. I think the earliest
fascicle drafts I read were missing the OR part.
8-bit hardware perfect shuffles aren't all that interesting.
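Here's the 8x8 boolean matrix product on 64-bit words, treating byte
i of a word as row i and bit j of that byte as column j (a Python
model; I'm not claiming it matches MMIX's MOR operand order exactly).
A permutation matrix gives you a byte shuffle as a special case:

```python
def word_to_matrix(w):
    # byte i of w is row i; bit j of that byte is column j
    return [[(w >> (8 * i + j)) & 1 for j in range(8)] for i in range(8)]

def matrix_to_word(m):
    w = 0
    for i in range(8):
        for j in range(8):
            w |= m[i][j] << (8 * i + j)
    return w

def bmm(a, b):
    """Boolean 8x8 matrix multiply: c[i][j] = OR over k of
    a[i][k] AND b[k][j]."""
    A, B = word_to_matrix(a), word_to_matrix(b)
    C = [[1 if any(A[i][k] & B[k][j] for k in range(8)) else 0
          for j in range(8)]
         for i in range(8)]
    return matrix_to_word(C)
```

The identity matrix leaves a word alone, and the anti-diagonal
permutation matrix performs a full byte swap.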
> > - the persistence-of-vision lettering is actually quite hard to see,
> > partly because it appears backwards half the time ... partly because
> > it appears in a different position on every stroke.
> How about trying to phase lock (or simply trigger) on a little
> ball-bearing switch? It might sound like shaking up spraypaint, but
> should be pretty cheap...
That's a good idea. Or I could just bang it on the wall at one end of
the trajectory, and use a piezo mike.
In "Designing Analog Chips", Hans Camenzind comments that phase-locked
loops should really be called frequency-locked loops, because the
phase isn't locked --- it changes as the frequency does.
Connecting back to earlier parts of this email, one of the parts of
Henry Massalin's Synthesis thesis I didn't understand at the time was
the "phase-locked-loop scheduling" stuff, for running Unix-style
pipelines of sound processing in a soft-real-time context. Maybe now
that I know what a PLL is, I should go back and reread it.
In case anyone else reading this is as confused as I was about PLLs,
the basic idea of the PLL is really simple: the frequency of your
reference signal is modulated by negative phase feedback. To get the
phase feedback, you multiply your reference signal (normally a +1 to
-1 square wave) by your input signal (e.g. the entire FM radio
spectrum) and integrate the result over some time period, say, ten
Any signal whose frequency is far enough from the reference signal
that it is uncorrelated with it over the integration time period will
just drop out. Signals that are very close to the reference signal in
frequency, but 90 degrees out of phase with it, will also drop out.
But if that signal gets, e.g., 80 degrees out of phase, then it
integrates to some positive number --- which speeds up or slows down
the reference signal to get back into phase with the detected signal.
By this means, once the PLL has "locked" onto the signal, its
frequency will change to follow the signal over quite a large range
--- but the phase is only exactly 90 degrees when the signal is at the
"natural" frequency of the reference signal, and at higher or lower
frequencies, the phase offset can be considerable.
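The multiply-and-integrate phase detector described above is easy to
check numerically (a Python sketch; I'm averaging over whole cycles):

```python
import math

def phase_detector(delta_deg, cycles=50, steps=1000):
    """Average of sin(theta + delta) times a square-wave reference
    sign(sin theta), over `cycles` whole cycles.  Converges to
    (2/pi) * cos(delta): maximal in phase, zero at 90 degrees."""
    delta = math.radians(delta_deg)
    total = 0.0
    n = cycles * steps
    for i in range(n):
        theta = 2 * math.pi * i / steps
        ref = 1.0 if math.sin(theta) >= 0 else -1.0  # reference square wave
        total += math.sin(theta + delta) * ref        # multiply...
    return total / n                                  # ...and integrate
```

At 90 degrees the average is zero (the locked condition); at 80
degrees it's a positive number that the loop can use to pull the
reference back toward quadrature.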
>  in general, people (both for raw sense data and for emotions)
> are very insensitive at low frequencies, and as stampedes and riots (or
> their larger cousins, wars) show, overly sensitive at high frequencies.
> I wonder what the human equivalent of a gyrator would be?
>  a device which synthesizes an inductance from amplifiers and
> capacitances. How many differentiators does it take to build an
I don't know how gyrators work. You can't build a real integrator out
of differentiators, but if you can restrict your input to be periodic,
you can fake it with three differentiators.
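Here's the sense in which that works: for a unit-frequency sinusoid,
each differentiation advances the phase 90 degrees, so three of them
land you at -90, right where integration would (the amplitude scales
as omega cubed instead of 1/omega, so it only matches exactly at
omega = 1; each harmonic of a general periodic signal would need
rescaling). A numeric check with central differences, in Python:

```python
import math

def third_derivative(f, x, h=1e-2):
    """Central-difference approximation to f'''(x), accurate to
    O(h^2)."""
    return (f(x + 2 * h) - 2 * f(x + h)
            + 2 * f(x - h) - f(x - 2 * h)) / (2 * h ** 3)
```

For f = sin, the third derivative is -cos, which is also the
antiderivative of sin --- the "faked" integrator.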
>  welding is one of those cases where one can't distinguish between
> DC and AC simply by what frequencies occur in the spectrum. A pool of
> molten Al reacts differently depending upon which direction the current
> is flowing -- to the torch, or to the work -- and so even a pulsed
> square wave, which as far as a mathematician is concerned has plenty of
> high-frequency harmonics, and hence would seem to be AC, behaves like
> DC as far as the metal is concerned, unless some of the pulses actually
> have their polarity reversed.
This puzzled me for a while, but I have an old welding textbook from
Project Gutenberg that explains that arc welding must use DC --- AC
welding would be completely impractical, because the majority of the
energy of the spark is dissipated at the point where electrons hit
metal, i.e. the anode. I assume the arc welding of the time was TIG.
No doubt the authors of the book were startled when MIG and stick
welding became popular, which I guess was in part because they didn't
need such fancy power supplies.
>  in this model, one can compute (bresenham: move horizontally) or
> output (bresenham: move vertically). Would a transaction system, where
> one can output UNDOes as well as DOes, then be somewhat like a
> second-order delta-sigma?
I still don't understand second-order delta-sigma conversion, despite
many generous people's explanations, including the aforementioned Hans
Camenzind book; its 206th page (15-8) has the following reassuring
passage:
Most engineers find explanations of the sigma-delta converter
almost incomprehensible. There is a reason for this: terms are
used which are quite non-descriptive and often misleading.
Take a look at a conventional diagram for a first-order
delta-sigma ADC (Figure 15-11). This circuit has a "one-bit"
output, which is more a riddle than a description. Figure 15-12
shows the same function with more familiar blocks, which makes
the circuit far easier to understand.
In fact, after skimming his description again, I'm no longer sure I
understand first-order delta-sigma conversion!
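For whatever it's worth, here's the "double loop" second-order
version as I've seen it drawn --- two cascaded accumulators, a
one-bit quantizer, and the output fed back into both stages --- as a
Python sketch. I'm not claiming this makes it any less of a riddle:

```python
def second_order_dsm(x, steps):
    """Second-order delta-sigma ("double loop") modulator sketch.
    x is the input level in (-1, 1); the output is a +/-1 bitstream
    whose average tracks x.  The first accumulator integrates the
    input error; the second integrates the first, minus the same
    feedback."""
    i1 = i2 = 0.0
    out = []
    for _ in range(steps):
        y = 1.0 if i2 >= 0 else -1.0  # one-bit quantizer
        i1 += x - y                   # first integrator: input minus feedback
        i2 += i1 - y                  # second integrator: also gets feedback
        out.append(y)
    return out
```

At x = 0.25 the bitstream settles into a repeating pattern whose
average is exactly 0.25, just with the pulses arranged differently
than a first-order loop would arrange them.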
>  I was taught the CS view of neurons, in which after integrating a
> certain amount of input excitation, they produce an output spike, and
> reset the integrator. That sounds quite a bit like the inner loop of
> bresenham, which outputs a vertical and resets after integrating a
> certain amount of error. Is the biologists' view of a neuron at all
It does. I wonder.
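The CS-view neuron reduced to its arithmetic core really is the
Bresenham inner loop (a Python sketch; the parameter names are mine):

```python
def integrate_and_fire(excitation, threshold, steps):
    """Integrate-and-fire neuron: accumulate input each step; on
    crossing the threshold, emit a spike and reset by subtracting the
    threshold --- Bresenham's "integrate error, output a vertical,
    reset" loop."""
    acc, spikes = 0, []
    for _ in range(steps):
        acc += excitation
        if acc >= threshold:
            acc -= threshold
            spikes.append(1)
        else:
            spikes.append(0)
    return spikes
```

With excitation 3 and threshold 8, sixteen steps produce exactly six
evenly-spaced spikes --- a Bresenham line of slope 3/8.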