Subject: Re: A 21st Century Apple II?
apple2freak{at}gmail.com wrote:
> On Mar 5, 9:11 am, "Michael J. Mahon" wrote:
>> apple2fr...{at}gmail.com wrote:
>>> On Mar 4, 6:18 pm, "Michael J. Mahon" wrote:
>>>> apple2fr...{at}gmail.com wrote:
>>>>> On Mar 4, 2:23 am, "Michael J. Mahon" wrote:
>>>>>> apple2fr...{at}gmail.com wrote:
>>>>>>> On Mar 3, 3:37 pm, "Michael J. Mahon" wrote:
>>>>>>>> apple2fr...{at}gmail.com wrote:
>>>>>>>>> On Mar 2, 9:26 am, mwillegal wrote:
>>>>>>>>>> On Mar 1, 8:57 pm, adric22 wrote:
>>> I can't imagine someone using the old tools even on a 4MHz Apple II
>>> being as productive as someone using a more modern cross-development
>>> environment on a PC. A modern editor (well, emacs isn't exactly
>>> modern, but...) combined with the near-instantaneous compilation of
>>> even very large (for the Apple II) programs would be responsible for a
>>> large part of this increased productivity I suspect.
>> Productivity in a resource-constrained environment has almost nothing
>> to do with the "efficiency" of the toolset. If it takes me a week to
>> input and debug my routine, that's another week of careful thought
>> about the code and its behavior, and another 50 improvements in both
>> substance and style.
>>
>> I love doing this, so I'm in no hurry for it to end! The longer I take
>> to do something, the better the result and the greater the joy.
>>
> I see your point. If you spend the majority of your time working out
> the details of your design, then who cares if it takes 20 minutes to
> assemble/compile the code when you complete it.
>
>> That's how different a labor of love is from a job! ;-)
>>
> This is a key distinction.
>
> [...]
>
>> Speed is nice for some things, but when it causes programs to be
>> developed by "tweaking" things and recompiling instead of sitting
>> down with a pencil and figuring it out, it is a disservice to the
>> programmer.
>>
> If you're involved in a labor of love, and time is of little
> importance, then I mostly agree with you. OTOH, if you are working to
> a schedule, and need to produce some tangible results in less time
> than it takes you to develop a full understanding of the system you
> are working on, sometimes "tweaking" and recompiling is the only
> choice available to you.
>
> Tweaking is, I might add, also a perfectly valid way of determining
> how something functions. Contrary to Greek philosophy, it is not
> necessary to examine the nature of atoms in order to determine the
> nature of things built from atoms. It is also perfectly valid (as the
> Chinese discovered about 200 years before the Greeks) to determine the
> nature of things based on how they interact with themselves and other
> things. Intelligent "tweaking" involves exactly this principle, and
> is also central to the art (if I may call it that) of reverse
> engineering.
I agree, tweaking has its place--particularly in dealing with
complex systems. "Patch and test" is a common way to proceed.
However, testing a complex system is necessarily incomplete, and
the patch often has unintended and undiscovered consequences,
leading to more bugs (or job security, depending on point of view ;-).
>> In the days of batch processing, I would pore over memory dumps
>> until I understood every memory structure and table, often finding
>> serious bugs or efficiency issues that had not yet been manifested
>> in any other way. The result was that each "run" led to the correction
>> of dozens of problems, and the program improved dramatically.
>>
> I prefer your technique when I'm dealing with code I've written
> myself, but the "tweaking" technique when I'm working with other
> people's code. Eventually, I develop a complete understanding of the
> code in either case, although each approach has advantages and
> disadvantages.
If the code is "understandable", then I prefer the "full disclosure"
method, regardless of author. But I agree that this condition is
most often met for code I've written. ;-)
My most rapid learning occurs when reading code written by excellent
programmers--often in assembly language.
>> And I had immense satisfaction in achieving a *complete* understanding
>> of every aspect of the running code--even the ones that did not affect
>> the "function" of the program. When it was possible, I enjoyed
>> *listening* to the execution of the program on a detuned AM radio,
>> which gave me a good idea of the relative speeds of various parts of
>> the program!
>>
> Heh. I remember being able to hear the execution on the TV set
> connected to the computer with the volume turned up sufficiently high.
I found that I could easily distinguish the various processing loops
and sorts in the program from the tones and noise components (sorts
are high entropy, and so noisy). And whenever the (interactive
graphics) program hung in a loop, that fact was instantly recognizable
from the sound.
Machines are too fast to provide good audible cues these days, though
I've thought about ways to make their behavior more audible...
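One crude approach (sketched here in Python with a made-up trace,
purely as an illustration) would be to sample which region of the
program is running and turn each sample into a short tone:

# Sketch: "sonify" a program trace by giving each sampled region of
# the program its own pitch and writing the result out as a WAV file.
# The trace and the frequency table are invented for illustration.

import math, struct, wave

RATE = 8000                                      # audio samples/second
TONE = {"edit": 440, "sort": 880, "idle": 220}   # region -> pitch (Hz)

def sonify(trace, filename="trace.wav", slice_ms=50):
    with wave.open(filename, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        for region in trace:
            freq = TONE[region]
            n = RATE * slice_ms // 1000
            w.writeframes(b"".join(
                struct.pack("<h", int(12000 *
                    math.sin(2 * math.pi * freq * i / RATE)))
                for i in range(n)))

# A made-up trace: mostly editing, then a sort, then an idle stretch.
sonify(["edit"] * 10 + ["sort"] * 6 + ["idle"] * 4)

Played back, each phase gets its own pitch, so a hang in one loop
would stand out as a tone that never ends.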
...
>> Surely, one of the primary attractions of working with old computers,
>> and particularly the Apple II, is an appreciation of the joy of being
>> on the bare, beautiful metal--just as a master woodworker would never
>> use a power tool when a simple manual tool provides a more immediate
>> experience of the texture of the wood.
>>
> As a systems engineer, I'd like to think that complexity (when
> necessary) is not inherently ugly. I'll grant you that because of
> human nature, unnecessary complexity is all too often employed, and in
> this case, it is rightfully called bloat.
Fair enough. I sometimes consider the function:complexity ratio as
a figure of merit.
> Regarding the Apple II -- it is a beautiful computer -- but the design
> was constrained not only by the principles of simplicity, elegance,
> and efficiency, but also by cost and the technological limitations of
> the time. If you remove the latter two limitations (well, at least
> the technological limitation anyway), and put yourself in Woz's shoes,
> what new works of beauty might you come up with?
That's an interesting question. I'd like to think it would head off
in the direction of parallelism--multiple copies of a simple unit that
could do wonders in concert. (The Propeller, or the AppleCrate, comes
to mind. ;-)
> [...]
>
>>> Given the large die sizes used in the old days, I imagine it should be
>>> possible (given access to suitable equipment) to slice open the chip
>>> and examine the die under a fairly low power microscope. This should
>>> enable reconstruction of an equivalent circuit. Hardly worth the
>>> effort, though, except perhaps for the challenge of doing it. That's
>>> assuming that Apple wouldn't be willing to provide details on the
>>> ASICs from their archives.
>> Right--complete understanding is possible *in principle*, it's just not
>> available *in fact*. ;-)
>>
> Or the cost/benefit ratio makes a complete understanding unattractive?
Well, for average folks, die surgery is not really an option...
> [...]
>
>>> OTOH, someone with a "development system" could dynamically compile a
>>> particular "instance" of hardware they would like to use, and then
>>> load it into their system. This would be analogous to having
>>> reconfigurable peripheral cards in a real Apple II except that your
>>> entire system would be reconfigurable. If you got tired of the Apple
>>> II one day and decided you wanted to try out a TRS-80, you'd just have
>>> to create a new implementation and you'd have it.
>> I really do appreciate the concept of reconfigurability--and I also
>> appreciate that it is precisely *reconfigurability* that causes FPGAs
>> to be more intrinsically complex than dedicated logic.
>>
>> That doesn't bother me when my objective is reconfigurability--only when
>> it is the default method of implementing what is not reconfigured.
>>
> Hmm. But isn't the ability to reconfigure the device a huge asset,
> even when the intent of a project is not to create a reconfigurable
> device? It's awfully nice to be able to correct hardware bugs by
> using reconfigurable software just like it's awfully convenient to be
> able to fix software bugs by reloading a device's firmware.
It certainly is, but we succeeded in producing lots of sufficiently
error-free devices with non-reconfigurable parts.
Reconfigurability is a two-edged sword: it makes change easy and
so encourages creativity, but it makes change easy and so encourages
sloppy design.
I have a DVR with its software in flash memory. About once a month
(on average) it is automatically updated with new firmware. About
half the time new features (mostly junk) are added, and about half
the time, bugs introduced by the last update are corrected (I hope).
I don't regard this as significant progress, because the quality
control of the firmware has clearly been weakened as a direct result
of the increased ease of updating it.
I yearn for greater discipline...
>> I have often found that relaxing design constraints results in a worse
>> design, not a better one. Sometimes flexibility is the enemy!
>>
> Using an FPGA in a design does not represent a relaxation of design
> constraints, although it does represent an increase in flexibility. I
> suppose this flexibility may lead to sloppiness if a designer realizes
> he can correct any problems that come up by simply recompiling his HDL
> code, but that is human nature, and not inherent in the increased
> flexibility, which is also a significant asset if the system's
> requirements may be subject to revision in the future.
I didn't say the problem wasn't human nature--I said that constraints
on resources usually improve the design quality by requiring higher
quality designers. ;-)
And using an FPGA does relax the design constraints, since it provides
lots of "spare" logic to support doing things poorly--just like lots of
memory naturally leads to software bloat by removing any reasonable
size constraint.
A good designer can still do beautiful designs without intrinsic
constraints, but the only thing making that happen is his own
discipline. And his boss will hate him for it if it takes
another day.
If the system resources are tightly constrained, then the extra
effort is necessary, and his boss regards him as a hero.
>> Don't get me wrong, I'm no Luddite (no offense, Simon ;-), but what I
>> personally enjoy most about engineering is the solution of difficult
>> problems with minimal physical resources and maximal human ingenuity.
>>
> I understand and appreciate your point of view. In fact, I even share
> it if you relax the definition of "minimal physical resources" to
> include contemporary technologies, rather than 25-year-old
> technologies. :)
Sure, no problem. But notice how "modern" tends to mean "so much
of everything that you'll never run out". Inefficiency has become
a principle of modern software design, and is on its way in hardware
design.
>> I know this is unpopular these days, though there may come a time (think
>> desert island ;-) when it will become more practical. One thing that I
>> am sure of is that I have enough versatility to appreciate both the
>> convenience of a Big Mac and the joy of a cake made "from scratch"--
>> each in its time and place.
>>
> Agreed. I suspect we'd both give up the Big Mac before the cake "made
> from scratch" though...
;-)
>> Here, I've been putting forward the idea that the enjoyment of old
>> computer systems results at least partly from the ability to get away
>> from "up to date" industrial technology and revel in
what can be done
>> with simple technology employed cleverly and with great skill.
>>
> Again, I understand and appreciate this.
Yes, I think we understand each other pretty well, actually.
I certainly respect your approach and your project plans.
> [...]
>
>>>> Not only is it gone, it could not exist today. Today's "kits" are
>>>> essentially mechanical assembly, since the circuitry is both pre-
>>>> printed and nano-sized. That kind of assembly teaches electronics
>>>> about as well as putting together an Ikea bookcase teaches furniture
>>>> construction. ;-)
>>> But it does exist today -- just on a much smaller scale. Check out
>>> www.ramseyelectronics.com for an example.
>> I'm familiar with Ramsey's kits--and they are perhaps the closest
>> surviving relative of the Heathkit. Ironically, many of their kits
>> use the same types of ICs and circuit boards as the Apple II--so I
>> see them as confirmation of my principle.
>>
> Ramsey's kits are much simpler than most of what Heathkit offered. I
> don't see any color TVs in the Ramsey catalog. Also, Ramsey doesn't
> even come close to the quality present in all of the Heathkits I ever
> built. Still, a number of their projects make liberal use of surface
> mount components which are smaller than a grain of rice.
Most of Heath's offerings were much less complex than their color TV.
I actually helped a friend build one of those in the mid-1950s. The
standard construction technique was point-to-point wiring between
vacuum tube sockets plus some terminal strips.
Most of my kits were Eico--big VTVM, signal generators, etc.--for
working on TVs and video circuits.
>> If you read electronics hobby magazines today (there are still a
>> couple), you will note that an increasing fraction of the projects
>> are based on programming a microcontroller--often to make it function
>> like a 555 timer and a couple of gates! I love microcontrollers, but
>> I also love 555s and gates, and would hate to see them languish.
>>
> Which one is more applicable depends a lot on your design
> constraints. With microcontrollers being very cheap now (not much
> more than a 555), they are increasingly being used in applications
> where less complex devices could be used. But let's not forget that
> designers appreciate their reconfigurability should the design
> constraints change.
Absolutely. And when they come in an 8-pin DIP package, and with
all that versatility, who can resist?
>> Several times in my life, I've found people writing complex (and often
>> incorrect) code to compute a moving average that could have been
>> computed in the analog domain with a single resistor and a capacitor!
>>
> Unless the circuit in question had a very low output impedance, I
> think you might have to add another resistor and an op amp to your
> circuit above. ;)
Ironically, the output was often a logic level, and the output of
the RC integrator went to a meter or galvanometer recorder, so no
buffering was needed.
This simple scheme produced a "processor idle" signal, while a simple
R-2R D/A on a process ID register produced a "running process" level.
This simple performance tool allowed us to home in on a big performance
problem in a day that had kept a team guessing (literally) for weeks!
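For the curious: the digital cousin of that RC integrator is a
one-multiply exponential moving average. A minimal Python sketch,
with made-up component values, purely as an illustration (not the
original setup):

# Rough sketch: a first-order RC low-pass is equivalent to an
# exponential moving average y[n] = y[n-1] + alpha*(x[n] - y[n-1]),
# with alpha = dt / (R*C + dt).  Values below are illustrative.

def rc_smooth(samples, r_ohms, c_farads, dt_seconds):
    """Smooth a sequence of samples the way an RC integrator would."""
    alpha = dt_seconds / (r_ohms * c_farads + dt_seconds)
    y = 0.0
    out = []
    for x in samples:
        y += alpha * (x - y)     # one multiply-accumulate per sample
        out.append(y)
    return out

# A square-ish "processor idle" logic signal: 1 = idle, 0 = busy.
idle = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1] * 10
print("idle fraction ~ %.2f" % rc_smooth(idle, 10e3, 10e-6, 0.01)[-1])

The point is the same either way: one state variable and one
multiply-accumulate, versus the "complex (and often incorrect)" code
it replaces.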
>>>> Of course, there are the "100 experiments" packaged products, but
>>>> they are all *very* introductory.
>>> Yes. Good maybe for children in middle school to play with.
>> Or grade school. By the time I was a freshman, I had built an
>> oscilloscope and was working on sweep circuits and video amplifiers
>> for photomultipliers. All of my parts were salvaged from trashed
>> radios and TVs and a few military surplus purchases.
>>
> Ahem. How many of your peers had accomplished similar things at that
> point of your life?
Apparently quite a few, though I didn't know them then!
What I worry about is what do those kids do today? Where is their
point of entry? How high is the barrier? Where will we be without
them?
I was building (vacuum tube) circuits in grade school, and soldering
up a storm! I somehow can't see that happening with surface-mount
parts...I hope I'm wrong.
I could take a radio apart with a screwdriver or nutdriver and see
exactly how the signals flowed and how to modify the local oscillator
to make it an 80-meter receiver. Good luck doing that with a single-
chip radio. Come to think of it, just opening an ultrasonically
welded plastic case has a tendency to be destructive. ;-)
>>>> The good news is that electronics can still be done at the SSI/MSI
>>>> level, where functions and connectivity are visible and hackable
>>>> with inexpensive tools.
>>> I see some nostalgia value in doing this, but I don't really think
>>> that a whole lot of practical knowledge would be gained that would
>>> have applicability in the world today.
>> Well, it would be a good foundation for someone who would then learn
>> about FPGAs. ;-) Or, perhaps, one could skip all that, the way that
>> engineers today skip vacuum tubes... (Of course, that leaves them
>> vulnerable to "mystical" ideas about tubes and their
problems. ;-)
>>
> Vacuum tubes are essentially obsolete, so there isn't much point to
> learning about them for most engineers. Someday, the same will be
> said about the 7400 and 4000 logic families, so it seems to me to be
> more worthwhile to teach the abstract (e.g. theoretical) concepts
> first, and then these may be applied to whatever technology the
> engineer happens to be working with at the time.
That's the usual approach to engineering education, so it must
be working. Of course, gates used to *be* the abstraction.
I recall in the 1970s, a visionary engineer at Burroughs wanted
to replace logic with just registers and ROMs. At the time, the
size and speed of the required ROMs didn't quite meet the need,
but he was, of course, correct.
Another fellow at HP Labs was thinking the same way and creating
"algorithmic state machines" for calculators--and Woz's disk
controller is a direct descendant of that concept.
PLAs were becoming a common "picoprogramming" tool, as in the 6502,
and machine design was moving toward the microprogrammed bit slice.
There has been a steady hardware progression toward orderly
structures of "memory-like" components, and I expect that will
continue.
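To make the "registers and ROMs" idea concrete, here's a toy Python
sketch (states and table invented for illustration, not taken from
any real design): the current state and the input form the "ROM
address", and the ROM contents supply the next state and the
output--no random logic at all.

# Toy "algorithmic state machine" done as a ROM lookup:
# address = (state, input bit), data = (next state, output bit).
# This one pulses its output whenever it sees the serial pattern 101.

ROM = {
    ("IDLE",  0): ("IDLE",  0),
    ("IDLE",  1): ("GOT1",  0),
    ("GOT1",  0): ("GOT10", 0),
    ("GOT1",  1): ("GOT1",  0),
    ("GOT10", 0): ("IDLE",  0),
    ("GOT10", 1): ("GOT1",  1),   # saw "101": assert the output
}

def run(bits):
    state, outputs = "IDLE", []
    for b in bits:
        state, out = ROM[(state, b)]   # one "ROM access" per clock
        outputs.append(out)
    return outputs

print(run([1, 0, 1, 1, 0, 1, 0, 0]))   # -> [0, 0, 1, 0, 0, 1, 0, 0]

Swap the dictionary for an actual ROM (or a block RAM in an FPGA),
clock the state through a register, and you have the whole machine.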
> [...]
>> Absolutely. Using a marketable skill set to play with Apple II's is a
>> very understandable thing. It's just not my thing. ;-)
>>
> I understand that you enjoy working with the older technology.
>
> We both appreciate simplicity, elegance, and efficiency.
>
> I prefer to work with newer technology -- to me it represents greater
> unrealized potential.
Unquestionably.
> [...]
>
>> The same goes for global logic optimization. Though the search space
>> is usually much smaller, it's still an NP-complete problem.
>>
> We'll never know unless someone tries it. I suspect a modern computer
> could solve an NP-complete problem of the order of global logic
> optimization for an Apple II computer in a couple of seconds.
Never underestimate the growth rate of a factorial!
But I'd love to see the result. ;-)
Of course, part of the problem is just expressing the constraints:
"Map the addresses of a combined text/hi-res display so that both the
screen and the DRAM are refreshed completely at the correct rates
while wasting a minimal amount of DRAM."
"Create a video bit stream with alternating phase relative to a color
reference so that stable artifact colors can be generated using the
same bit patterns on consecutive lines."
These were the constraints that Woz mapped into gates--or rather
74xx packages, since chip count was his metric.
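To give a flavor of the first constraint, here's a quick Python sketch
(my own illustration) of the hi-res line-address interleave that falls
out of making the video scan double as DRAM refresh while wasting only
8 bytes out of every 128:

# Base address of each hi-res page-1 scan line (0-191) on the Apple II.
# The scattered layout is the price of letting the video counters do
# double duty as DRAM refresh.

def hires_line_base(y):
    return (0x2000 + 0x400 * (y % 8)
            + 0x80 * ((y // 8) % 8) + 0x28 * (y // 64))

for y in (0, 1, 8, 64, 191):
    print("line %3d -> $%04X" % (y, hires_line_base(y)))
# line   0 -> $2000
# line   1 -> $2400
# line   8 -> $2080
# line  64 -> $2028
# line 191 -> $3FD0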
>>> What is important to me is being able to understand what the system is
>>> doing at an architectural level. I have a lot less interest in
>>> knowing what the electrons themselves are doing. I think this is the
>>> fundamental difference in our philosophy.
>> Or in what gives us the most satisfaction (after all, I'm a physicist).
>> ;-)
>>
> Now that explains a lot... ;)
;-)
>>>> It's a simple but regrettable fact that the resources required for an
>>>> implementation expand to fill the resources that are available, whether
>>>> it's bytes of memory, or transistors on a chip, or LUTs of an FPGA.
>>> I say we should blame the marketers. They are always pushing the
>>> engineers to produce more "functionality" in their company's products
>>> in order to differentiate themselves from their competitors. How many
>>> of us use even 10% of the features that are built into MS Word today?
>>> Probably 90% of the useful functionality of a modern word processor is
>>> contained within Appleworks. They also constantly push the engineers
>>> to produce more functionality in less time, which leads to the huge
>>> number of layers upon layers of libraries upon which modern software
>>> systems are built but which are primarily responsible for the bloat
>>> which we both so despise.
>> I think that's an excellent description of the situation.
>>
>> I also think it's degenerate, and to be regretted.
>>
>> We always have ignorant customers, but we don't always have to
>> pander to them. After all, we are supposed to know what's worth
>> doing and what isn't.
>>
>> A "market-driven" company is a company without vision. A company
>> with vision drives the market, not the other way around.
>>
> I completely agree with you. Know of any companies with vision?
> Better still, companies with vision that are hiring? ;)
There used to be quite a few, then it was fewer, and now I don't
know for sure. HP was certainly that way for a long time.
I think Google is such a company today.
> [...]
>
>>> Nothing wrong with wanting to know what the electrons are up to.
>>> However, I'm more concerned with an architectural understanding, and I
>>> find if I focus on what the electrons are doing, my ability to
>>> understand complex architectures is more compromised than if I don't.
>>> I think the difference is simply in our choice of allocation of mental
>>> resources.
>> I understand completely. And I recommend "stretching exercises" to
>> make it easier to think across more and more levels of abstraction.
>> That capability, more than any other, is the key to mastering system
>> design. Dividing a design up at the outset to limit communication
>> across levels is a sure way to get a suboptimal design--and often
>> *far* suboptimal.
>>
> What you say makes perfect sense when a project is simple enough to be
> handled by a single designer.
>
> Otherwise, the design should be layered with well-defined and limited
> interfaces defined at each layer so that each layer may be handled by
> a separate designer or team of designers with limited knowledge of
> what goes on in other layers. For any project of decent size, an
> approach such as this to limit the exposure of the complexity within
> each layer of the implementation is necessary in order for the project
> to be able to be completed by human beings in finite time.
In the late 1960s at Burroughs, there was a principle of software
projects which said that if a project cannot be done by three or
fewer people, it should not be done.
That was a pragmatic principle which limited complexity and enforced
a certain level of design coherence--purity, if you will.
It is a good principle. Consider Unix or C or C++, or even FORTRAN in
its early years. Consider what became Microsoft Word, or Excel, or
BASIC, or Visicalc, or dBase II--all the result of one or two designers.
Then consider what happened to these works of art as they became
"marketized" and industrialized. Though what happened may have been
inevitable, I'm quite sure it was not necessary.
Some parts of a problem break into loosely coupled modules more easily
than others. The math library can be developed relatively independently
of the compiler and linker, though only if the *architected* interfaces
are pre-defined and strong--and just versatile enough to allow for
reasonable evolution.
The problem with "divide and conquer" occurs when the problem either
has poorly defined "cleavage planes" or when any subdivision of the
problem leaves strong connectivity between the parts.
To arrive at a good design when connectivity is strong, a single
designer or a small, coherent team must direct all major tradeoffs to
ensure consistency and coherence of the system.
This is where most large projects fail. The initial division of the
problem is only apparently correct, and strong interdependencies among
the "separate" parts remain. Inconsistent decisions and priorities
are chosen by the various teams and the resulting problems surface
during integration, where politics and pragmatics complicate their
resolutions.
Even after the functional bugs are patched, there remain fundamental
performance and interdependency issues that become "fossilized" into
the system structure, sealing its fate.
> Possibly a system such as the Apple II series represents a level of
> complexity that is not far from the upper bound of what a talented
> single designer is capable of.
Perhaps, though there are many other less well known examples.
>> I watched Intel do that once with a microprocessor design, and they
>> wound up using 5x as many people for twice as long to get half the
>> performance in the same silicon process! Design methodology matters.
>>
> I agree with you that design methodology matters. I'm not sure
> whether we agree that for projects of any decent size, a design
> methodology that limits the exposure of complexity of the individual
> pieces is necessary for the project to be realizable in finite time,
> however.
Certainly, abstraction is necessary. The problem is with the
uneven quality and independence of the abstractions.
>> I've often said that if floors were transparent, there would be no
>> skyscrapers! But that applies to "users", not to the designers and
>> builders of skyscrapers, who must always think in three dimensions
>> to get correct answers.
>>
> For a skyscraper where each floor is a clone of the one above or below
> it, this makes sense. However, for a skyscraper in which each floor
> is completely different from the one above or below it, I don't agree.
It doesn't have to be a clone--just consider plumbing, HVAC, wiring,
etc. Many infrastructure elements are common even when the "floors"
are otherwise unique.
The design abstraction analog would be the passing of data from one
level to another, where the "meaning" of the data changes with the
abstraction level, but the "content" is essentially unchanged.
For example, a text string is a text string, whether it's a window
title or a filename. It may have an internal syntax, but it's still
a string. Now, how many times will it be converted in form from, say,
ASCII to Unicode to ASCII to Unicode before it gets used for something?
Or how many times will it be copied? Or how many times will it pass
from one protection domain to another, or one machine's cache to
another? These are all cases of wasted work, caused by confusing a
change in abstraction with a change in representation--which careful
design can avoid.
This is only an example, and a relatively benign one at that, though
it can result in millions of excess instructions being executed each
time a string is passed. How often does this happen with large
structures, like bitmaps? How often are non-local pointer chains
traversed, causing myriads of cache misses, each one costing as much
as hundreds of instructions?
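A toy illustration of the string case (made up, not from any particular
system): the same text crosses three "layers", gets converted at every
boundary, and arrives carrying exactly the information it started with.

# Each layer insists on its own representation of the same string.

def layer_a(title: str) -> bytes:
    return title.encode("utf-16")      # "wire format" for layer B

def layer_b(blob: bytes) -> bytes:
    text = blob.decode("utf-16")       # back to a string...
    return text.encode("utf-8")        # ...and out again for layer C

def layer_c(blob: bytes) -> str:
    return blob.decode("utf-8")        # converted once more before use

title = "My Window Title"
assert layer_c(layer_b(layer_a(title))) == title   # three conversions,
                                                   # zero new information

Every conversion allocates, copies, and touches every byte; scale that
up to bitmaps and pointer chains and the cache misses pile up fast.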
> By way of example, let's take the human body. If you lived to an
> average age of 80, and began studying it at age 18, you wouldn't be
> able to get through more than a small fraction of the total knowledge
> that is available. And by the time you were 80, you would have
> forgotten most of what you learned when you were younger anyway, plus
> a lot of the information you learned would become obsolete.
>
> Yet the human body represents a triumph of design, at least in the
> ways we've been talking about -- well, elegance and efficiency
> anyway. One may argue that it is as simple as it can be in order to do
> what it does as well.
Actually, it has numerous well known "design errors" that are wreaking
havoc today. It was never selected to live longer than 40-50 years,
so anyone older than that is out of warranty. ;-) The spine design
is very poorly adapted to erect posture. ;-)
But I get your point--inevitably, systems evolve to levels of
complexity that are beyond comprehensive understanding by any
one individual.
I wonder how we'll feel when we--actually "it"--constructs a
machine with a capacity for "understanding" exceeding our own?
> Someday, human beings may attempt to design something similarly
> complex. Yet none of the designers could possibly understand more
> than a tiny fraction of the whole. If I take your argument to its
> logical conclusion, I'd say this means that human beings are limited
> to designing efficient systems of limited complexity -- many orders of
> magnitude smaller than that which the human body represents. But I
> think that with the right tools and design methodologies it may
> someday be possible to do exactly this.
Agreed.
>> We have to be able to think about all the levels of abstraction as
>> close to simultaneously as we can. That's how system-appropriate
>> tradeoffs are made and cross-level system designs are optimized.
>>
> As a systems engineer, I appreciate what you are saying here.
> However, I do not need to understand each layer of a system to nearly
> the same level of detail as those who are responsible for the
> implementation of the individual layers.
It is not necessary, but it is always helpful, and frequently
shocking to discover that the problem you are striving to solve
at "your" level is being created by someone to solve a much easier
problem at "their" level--or vice versa!
The assumption of decoupling between levels is a goal that is
seldom achieved in practice--partly because all levels are drawing
on the same resource base.
>>> I understand. Woz did indeed design such a system -- for the 1980s.
>>> Where is the equivalent system for the 1990s or the 2000s?
>> That is an excellent question. I suppose we could stipulate that
>> as the field moves forward, so does the "entry point", and gate-level
>> design is now as dated as carburetor adjustment. But to me it seems
>> that something is lost when logic is just "hardware programming".
>>
> Something is lost, and something is gained. Just like we lost our
> tails when we came down out of the trees (if you subscribe to Darwin),
> we gained in other areas (opposable thumbs).
There simply is no other rational explanation for all the facts at
hand than Darwin's theory, and it is as marvelously successful and
productive as Newton's theory of gravity (as amended by Einstein).
...
> Is there another angle from which to view computing? One that would
> empower a vision of what computing could be if "the spirit of Woz" was
> alive today? Or is there simply no point to this line of thought and
> we should just enjoy the old systems for what they are alongside the
> new systems for what they are and never the twain shall mix?
I don't know in general. For me, modern systems affect older
systems only by supporting communication and distribution.
And older systems affect modern systems only through inspiration
of individual designers.
> [...]
>
>>>> Of course. I have very good control over my attitude!
>>> Tell me, what is your secret? Zen meditation? Or is it those
>>> marvelous microbrews you have in your neck of the woods? ;)
>> A lifetime of learning to deal with upsets and setbacks, and the
>> experience of making them "go away" by looking directly at what
>> I'm experiencing, which invariably shifts my point of view.
>>
> Ahh, so you're a Zen Buddhist then... ;)
Fair enough. They're not hampered by belief, they just experience
what works and what doesn't.
>> I don't always succeed fast enough to avoid some suffering, but that,
>> too, serves the valuable purpose of reinforcement--"It still hurts
>> when I do that!". ;-)
>>
> Yes, pain reminds us that we are still alive.
>
>> I think we've reached some sort of closure! ;-)
>>
> It has been a pleasure!
It has! Thank you!
-michael
******** Note new website URL ********
NadaNet and AppleCrate II for Apple II parallel computing!
Home page: http://home.comcast.net/~mjmahon/
"The wastebasket is our most important design
tool--and it's seriously underused."