echo: c_echo
to: All
from: Bob Stout
date: 2004-04-29 15:28:06
subject: Re: [C] Re: Extreme Programming

From: Bob Stout 

Well, let's see if I can reply to this today without a power glitch
rebooting my computer as it did yesterday... :-(

Quoting "Bruce D. Wedding" :

> First off, it isn't a corporate or management strategy.  It was invented by
> programmers for programmers.

Well, Bruce, there are programmers and there are programmers. After doing
some further research, the thing that jumped out at me regarding XP is that
it was developed by and for programmers heavily into object oriented
programming. The things that might appear great to an O-O shop are probably
going to be counter-productive in another software development environment.

Since this is a C conference, my comments reflect common usage among C
programmers. Although C can implement pseudo O-O practices, it's inherently
a procedural language. Also, although C is the preferred language for many
systems-level projects, it's been almost a decade since it was the language
of choice for most applications programming.

These realities don't reflect on either the language or the programmers.
I'll readily admit to poor O-O programming skills, just as many
applications programmers would find it difficult to write real-time
embedded code. So, the bottom line is that it was invented by O-O
programmers for O-O programmers. Any claims beyond that overlook a critical
distinction.

> > > 1.  Planning - Prioritize the next release by combining business
> > > priorities with technical estimates.  Basically, perform the
> > > changes in
> > > priority order.
> >
> > This restates the obvious.
>
> Bob, I doubt you would be surprised if I told you that I seldom see this
> happen in the real world.  Come to think of it, I've NEVER seen it happen in
> the real world.  We develop a specification and begin coding and at that
> point, everything is at the same priority.  It's in the spec so it MUST be
> completed.

The obvious answer to this is that you're offloading the burden of poor
specification to the programming staff. But again, this merely reflects a
difference in mindset. In a typical applications (read O-O) shop, the spec,
such as it is, will often include numerous bells and whistles of varying
priorities. In my world, the spec defines the totality of the required
feature set. If any are omitted, you don't have a product. You can leave an
attractive convenience feature off of a word processor and still have a
word processor. OTOH, you can't leave firemen's service off of an elevator
control system, knock detection off of an engine management system, or
physical error detection and recovery off of a device driver.

A couple of these also touch on another key difference - in a lot of the
work in my world, the spec requirements are set by laws and/or standards
committees. They share the view that if everything they say isn't there,
you don't have a product. There are also legal considerations - in my
world, leaving some behavior poorly defined or leaving off a feature can
result in significant loss of life or property. If your spell checker
doesn't work exactly right, no one will likely die as a result.

> > > 3.  Testing - Continuous unit testing.  Tests are written as code is
> > > written.  Customers write and run feature tests.  An integration test
> > > must be run at each build.
> >
> > Again, this *should* be restating the obvious, but frequently it's not
> > done.
>
> And I've never seen it done to the extent that Extreme programming suggests.
> When writing in C++, every class has test methods that test the class
> functionality and must be updated anytime a method is added to the class.
> All class tests are invoked by one call to a base test class.  Examples are:
> http://www.xprogramming.com/testfram.htm

In most applications (O-O) shops, testing is as you say - simply a matter
of adding a test method. In my world, there are typically three levels of
testing: software only, hardware only, and mixed. When writing ISR's or
other low-level OS code, testing often involves logic analyzers, in-circuit
emulators, oscilloscopes, etc. You can sometimes add code to assist, but
even then, your test code can seriously screw up system timing. IOW, in my
world, this restates obvious goals, but may be messy, at best, to
implement.
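
When test code does get added in my world, it has to be nearly free. Here's
a minimal sketch of the sort of thing I mean (all names and sizes invented
for the example): failed checks get logged to a small RAM buffer that a
background task or a debugger can dump later, instead of printf()'ing from
inside timing-critical code:

/* Low-intrusion check macro for timing-sensitive code. Failures are
   recorded in a RAM ring buffer instead of printed, so the test code
   doesn't wreck the timing being measured. Not reentrancy-safe as
   written - good enough for a sketch, not for production. */

#define TRACE_SLOTS 32

static struct {
    const char *file;
    int         line;
} trace_buf[TRACE_SLOTS];

static volatile unsigned trace_head;

#define CHECK(expr)                                       \
    do {                                                  \
        if (!(expr)) {                                    \
            unsigned i_ = trace_head++ % TRACE_SLOTS;     \
            trace_buf[i_].file = __FILE__;                \
            trace_buf[i_].line = __LINE__;                \
        }                                                 \
    } while (0)

/* Example use inside a hypothetical serial receive ISR. */
void uart_rx_isr(unsigned char byte, unsigned fifo_count)
{
    CHECK(fifo_count < 256);        /* index must never overflow */
    /* ... the real ISR work goes here ... */
    (void)byte;
}

Even that little can perturb a marginal system, which is why the logic
analyzer usually wins in the end.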

> > > 4.  Refactoring - Continuous refactoring.  Simplify, remove
> > > duplication,
> > > add flexibility.
> >
> > Improving algorithms is the most reliable way to enhance software quality.
> > However, if there's more than one programmer working on it, coordination
> > and communication become major issues.
>
> "Improving algorithms" is a broad statement so I'm not
> exactly sure what you
> mean.

The old programmer's maxim is that the only way to achieve real code
improvement is to improve the algorithm rather than the implementation.
"Refactoring" is merely a new buzzword to encompass what library
writers have been doing for years - generalize, restructure, optimize,
iterate. Opportunities for algorithmic improvement typically come as part
of the restructure step.
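
A contrived C illustration of the generalize-and-restructure step (the
function names are invented for the example): two near-duplicate scan
loops collapse into one, with the old names kept as thin wrappers so
existing callers don't change:

/* Before: count_digits() and count_spaces() were two copies of the
   same loop, differing only in the character test they applied.
   After generalizing, one routine does the scanning and the old
   names become one-line wrappers. */

#include <ctype.h>
#include <stddef.h>

static size_t count_if(const char *s, int (*pred)(int))
{
    size_t n = 0;

    while (*s)
        if (pred((unsigned char)*s++))
            ++n;
    return n;
}

size_t count_digits(const char *s) { return count_if(s, isdigit); }
size_t count_spaces(const char *s) { return count_if(s, isspace); }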

On the one hand, I find it mildly annoying that people keep coming up with
new buzzwords to describe extant practices. On the other hand, it's nice to
be able to add the buzzwords to my resume with experience dating back to
before the buzzword was coined. ;-)

> > > 5.  Pair Programming
> >
> > Bull sh*t!!! I won't/can't work productively either with someone looking
> > over my shoulder or holding someone else's hand. I've actually quit jobs
> > rather than be stuck with this sort of arrangement.
>
> LOL!  I don't like it either Bob.  Whether I'm in charge or not, it is no
> fun and difficult to remain focused if you don't have the keyboard.  I
> actually get headaches from trying to follow code while some bozo is paging
> up and down.  I hate it.

"Good fences make good neighbors."

> > Collective ownership = collective responsibility = no responsibility or
> > accountability. This could only work if everyone had the same level of
> > knowledge and experience. Otherwise, you will have well-meaning but
> > clueless tyros "improving" critical code.
>
> They will tell you that this will be caught by the pair programming.  I also
> do not equate collective responsibility with no responsibility.

I do, and I have some experience with this. This is the software
development equivalent of a blog. Everyone contributes, good or bad. IOW,
it's a lot like the SNIPPETS collection. Still, at the end of the day,
everyone looks to some individual to make it all work together. In the case
of SNIPPETS, it's me. In the case of an XP project, it's whoever is
designated to do candidate build and unit testing. As Jon recently pointed
out, "blame" can be a useful feedback mechanism, but XP robs it
of any power, since whatever blame might be assigned now points to a pair
of programmers. The issue of ownership and accountability can therefore
never be resolved.

> The code has to pass the unit and integration tests before being accepted
> into the master source.  If it doesn't, you know that you broke it and have
> to fix it, hence learning the code.  If you have something
"critical" then
> certainly your tests will ensure that it operates as it should, meeting
> schedules, calculating properly, etc.  If the tests are properly written,
> then broken code can't be integrated.

In this, you repeatedly use the pronoun "you" to refer to a
programming pair. Since formal English has no way to differentiate between
the second person singular and plural pronouns, this is ambiguous, but I
believe the real error is that you're assuming individual responsibility
where the tenets of XP dictate shared responsibility.
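
That said, the gate itself is the easy part. Here's a minimal sketch of
the kind of top-level runner Bruce describes (the suite names are invented
for the example), where the integration script accepts a candidate only if
this program exits 0:

/* Top-level test gate: every module registers its suite here, each
   suite returns its failure count, and a nonzero exit status blocks
   the integration. The two suites below are trivial stand-ins. */

#include <stdio.h>

static int test_checksum(void)
{
    unsigned char msg[] = { 0x01, 0x02, 0x03 };
    unsigned sum = msg[0] + msg[1] + msg[2];

    return (sum == 0x06) ? 0 : 1;       /* 0 failures expected */
}

static int test_parser(void)
{
    return 0;                           /* placeholder suite */
}

int main(void)
{
    struct { const char *name; int (*run)(void); } suites[] = {
        { "checksum", test_checksum },
        { "parser",   test_parser   },
    };
    size_t i;
    int total = 0;

    for (i = 0; i < sizeof suites / sizeof suites[0]; i++) {
        int failed = suites[i].run();

        printf("%-10s %s (%d failures)\n", suites[i].name,
               failed ? "FAILED" : "ok", failed);
        total += failed;
    }
    return total ? 1 : 0;               /* nonzero blocks integration */
}

The hard part isn't writing that harness - it's deciding who answers for
what slips through it.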

> > > 7.  Continuous Integration - integrate and build many times a day.
> > I'm sure this creates the illusion of productivity among those who can't
> > tell the difference between movement and progress.
>
> I think you miss the purpose.  The purpose is that you build after each and
> every change so that it is patently obvious through integration testing, what
> is broken and what works.  The benefits are seen in decreased debugging
> time.

If you can produce release candidate code "many times a day",
you're writing trivial code. In any systems-level project I ever worked on,
the changes that could be made in only a few minutes or hours (as your time
frame requires) typically worked without modification. Significant changes
take amounts of time measured in at least half-day increments.

Also, this gets back to the testing issue. Are you talking about a full
regression test? Probably not, since those typically take many hours even
with software ATE tools. When I was working with Motorola Metrowerks, it
took at least 4 hours to perform a fully-automated regression test of just
the C compiler by itself - never mind the IDE, debugger, etc. And that was
after I got there and wrote the software ATE test suite. Before I arrived,
they only tested once per week since the tests required all weekend to run.
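
The skeleton of such a driver is nothing exotic - run each case through
the tool under test, compare the output against a stored known-good file,
tally the mismatches. A stripped-down sketch (the directory layout and the
"cc" command line are placeholders, not the actual Metrowerks setup):

/* Golden-file regression driver skeleton: build and run each case,
   then compare its output to the stored known-good result. Paths
   and the compiler command are placeholders. */

#include <stdio.h>
#include <stdlib.h>

static const char *cases[] = { "t0001", "t0002", "t0003" };

int main(void)
{
    char cmd[512];
    int  i, failures = 0;
    int  ncases = (int)(sizeof cases / sizeof cases[0]);

    for (i = 0; i < ncases; i++) {
        /* Compile and run the case, capturing its output. */
        sprintf(cmd, "cc -o tmp_%s cases/%s.c && ./tmp_%s > out/%s.out",
                cases[i], cases[i], cases[i], cases[i]);
        if (system(cmd) != 0) {
            printf("%s: build/run FAILED\n", cases[i]);
            failures++;
            continue;
        }

        /* Compare against the known-good ("golden") output. */
        sprintf(cmd, "cmp -s out/%s.out golden/%s.out",
                cases[i], cases[i]);
        if (system(cmd) != 0) {
            printf("%s: output differs from golden file\n", cases[i]);
            failures++;
        }
    }

    printf("%d of %d cases failed\n", failures, ncases);
    return failures ? 1 : 0;
}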

The type of software also factors into the question... Let's say you write
it and test it and something's wrong. In a hosted application, your
software ATE tools (assuming you have them) may be able to provide detailed
reports. In my world, if it breaks, there's all too often little you can do
except drag out the hardware and start hooking up probes to the PC board.

> > One thing none of your description addresses is how well the scheme
> > supports CMM. Forget ISO-9000, CMM is the only valid metric for companies
> > trying to insure software quality.
>
> I don't see why it is the only valid metric.  We produce FDA approved
> medical device software and I'm the only one there that even knows what the
> CMM is.

Apples and oranges... I've worked through FDA approvals on several products
(pacemakers and insulin pumps mostly), and all they care about is
functionality and reliability in the trials. Whether you're CMM level 1 or
level 5 is not their problem. CMM is only an issue for your company's cost
of software development and maintenance.

> The processes defined by CMM at various levels existed before the CMM
> did; it was defined by them, not the other way around.

The same as XP - what's your point? ;-)

> In any event, here is an article addressing the very issue if you care to
> read: http://www.xprogramming.com/xpmag/xp_and_cmm.htm

This article tends to buttress other comments I've read from those in the
XP movement, namely that CMM and XP may share some goals and methodologies,
but that XP, in toto, is not a CMM methodology.

> > Any scheme which doesn't clearly
> > address responsibility and accountability will have problems in a CMM
> > enhancement environment. From my perspective, your description has lots of
> > room for denied responsibility and finger pointing when things go wrong
> > (and, sooner or later, things *will* go wrong!)
>
> Bob, the goal is the creation of quality software.  There is only one
> output, the software, and if it doesn't work, the TEAM is responsible and
> accountable.

Sink or swim together - how egalitarian! Personally, and speaking from both
my programmer and management experience, I'd much rather identify the weak
member and cut him/her loose if remedial action fails. XP (as well as some
other team-centric approaches) only helps maintain the anonymity of the
incompetents.

> But, the real point is: none of that gets you any closer to quality
> software.  If the code is broken, it is irrelevant who broke it.  What is
> relevant is the root cause and how do WE fix it?

Who broke it is often less important than how it became broken. No one
deliberately writes broken code. When it breaks, it's most often a side
effect of fragility born of poor design. And this is the weakness of all
team approaches - a lack of consistent strategic vision. There are many
more tactical than strategic thinkers in the world, yet in a team, everyone
participates equally. I've witnessed literally dozens of instances where a
clear strategic vision was corrupted by petty bickering over tactical
non-issues. Anyone who wants to witness this first hand should try sitting
on an ANSI standards committee sometime.

So, you're both right and wrong here. You're right in that knowing who let
the error in may or may not be useful information. Knowing when and how it
crept in is vital, however. The only useful information you get from
knowing who let it in is in the possibility of remedial action. This gets
back to Jon's "blame as a feedback mechanism" argument.

> > It might be OK in a typical desktop environment where everyone is using
> > CASE or RAD tools. But when you get to a situation where real programmers
> > are writing real programs rather than letting a machine patch together
> > boiler plate, you need a responsible architect and a coherent design
> > strategy.
>
> Ignoring the obvious slight to application programmers,  I respectfully
> disagree.

That *was* snotty - I apologize.

> I've spent the last 5 years working on medical device software that uses
> VxWorks RTOS and no CASE or RAD tools and I think some of these concepts
> would have been of great benefit.

Ah, so you're a real programmer after all! ;-)

> I will also note that none of what I posted addressed the design which seems
> to be what you're taking issue with, other than the part about prioritizing.
> XP does address design issues but I was focusing more on.  You seem to infer
> that a large multi-programmer project starts with 12 guys at 6 computers with
> 6 blank source files.  I doubt that is the intent.

In my world, it's several guys, each with a single computer, but all with
template source files representing the coding standards and almost nothing
else. I almost always work in a white-paper environment. I may wind up
reusing code, but typically, I start with nothing but a requirements
document (if that much!). Also, typically in my world, nothing in the
requirements document is optional, so the only prioritizing is a side
effect of the development methodology (I do top-down design and bottom-up
coding).

> > I've been there. The whole "pair programming" thing sounds like a
> > corporate strategy to hire lots of entry-level programmers and force them to
> > interact, hoping they'll make up in epiphanies what they lack in
> > experience and depth of knowledge.
>
> Recalling that I'm generally opposed to pair programming, I believe the
> intent is to disseminate information from the experienced to the less so as
> well as spread the knowledge.  The obvious problem with your solution of
> having the guru do all the "critical" code is that the guru
> leaves one day
> and you're up a creek.  BTDT and had to pick up the pieces after he left.

The term "guru" encompasses all that's usually wrong with that
scenario. Everyone on a project should be cross-trained, but not all will
be equally competent. Information hoarding should be a capital offence.
OTOH, my current project wouldn't be where it is if the company hadn't
brought me in to rewrite a
complete product from scratch. Unlike me, the other guys on the project had
no confidence that my radically new design (new from their perspective, not
mine - I knew it would work) would work at all or provide any benefits. If
it had been a democracy, we'd still be sitting on a pile of
crappy code. However, in this case, the results of their previous efforts
had discredited them to the point where TPTB were willing to give me full
authority to dictate the design of the new code.

In my previous life as a manager, this is how I worked as well. I hired
good people (like Jon) and pretty much stayed out of their way. I always
let the most capable person design the code, but made sure that everyone on
that team had full disclosure, just in case it was needed.
"Gurus" are invaluable, but should never be irreplaceable. The
design was shared and discussion encouraged, but at the end of the day, the
designer's final word was law.

> > Except for the restatements of the obvious ("breathing is
good"), I'm
> > generally underwhelmed. From what I've seen, most management fads have
> > their basis in companies trying to minimize payroll costs without
> > sacrificing quality or functionality. That's not necessarily a bad goal,
> > but the fact that each year brings new fads tells us that each one has
> > flaws. The obvious solution - to recruit the best staff possible, give
> > them room to be creative, and retain them by treating them fairly - flies
> > in the face of conventional business wisdom today.
>
> I find our differing viewpoints quite interesting.  I didn't see this as a
> management fad.  Perhaps I'm naive but I saw it as an attempt to improve the
> pitiful software development situation present.

The problem of poor software quality has been around for as long as there's
been software. The driving force behind change has been the management of
the companies doing software development. The changes themselves have come
from computer scientists who were paid (either directly or via university
grants) by the companies wanting the changes.

The bottom line is quality, but when you look for quality, you're looking
from a different perspective than a Fortune 500 software development
manager. You're trying to achieve perfection, he's trying to achieve lower
unit costs.

One of the more memorable speeches I ever heard was at one of the Software
Development conferences back in the early '90s. The fellow speaking was the
manager of the MSC development team. He had lots of interesting things to
say, but the most significant from a management theory/sociology POV was
his statement that you should look for your best people and get rid of
them! The justification was that you'd be pushing them into a more
rewarding job ("It's for your own good.") and (naturally) that the staff
would become a continuously flowing queue of people, filled at one end with
those fresh out of school (and with a concomitantly low median payroll
cost) and drained at the other end of your most accomplished programmers.

Having spent a lot of time in middle-level software management, I can
attest that the pressure to slash payroll and replace expensive hands with
those fresh from school is *huge*. New technologies and methodologies were
pounced upon as ways of reducing the skill level required of the median
programmer. The same is true of management fads. Each has been an attempt
to use some set of no-brainer rules to replace entire levels of management.
Just as it's cost-efficient to hire programmers with little or no real
experience, it's also cost efficient to hire managers with no
qualifications in the fields they manage.

> We've all seen the numbers regarding the number of defects in released
> software, the percentage of software projects that are late or never even
> finished.  I'm not saying this is the solution to all of that, just an
> attempt.  I've never practiced this stuff, just read a book or two.  I think
> there are things to learn from it and things to leave behind.

You can always learn from almost anything, but discernment is required. For
the work I do, however, I see very little new here, and what is new I
consider questionable. As always, YMMV...

-------------------------------------------------------------
Consulting: http://www.MicroFirm.biz/ Web graphics development:
http://Image-Magicians.com/ Software archives:
http://snippets.snippets.org/
  c.snippets.org/   cpp.snippets.org/      java.snippets.org/
  d.snippets.org/   python.snippets.org/   perl.snippets.org/
  dos.snippets.org/ embedded.snippets.org/ apps.snippets.org/
Audio and loudspeaker design:
  http://LDSG.snippets.org/   http://www.diyspeakers.net/


----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.

--- BBBS/LiI v4.01 Flag-5
* Origin: Prism's_Point (1:261/38.1)
