subject: Re: Characterizing comple
Guy Hoelzer wrote or quoted:
> in article cfas1m$22d8$1{at}darwin.ediacara.org, Tim Tyler at tim{at}tt1lock.org
> > Guy Hoelzer wrote or quoted:
> >> Tim Tyler at tim{at}tt1lock.org wrote on 8/8/04 3:22 PM:
> >>> Guy Hoelzer wrote or quoted:
> >>>> I disagree with your interpretation of Shannon information as being
> >>>> about how surprising the data are. That is what statistics are for.
> >>>> There is no indication of rarity or inconsistency with prior beliefs
> >>>> in Shannon's information measure. It is, instead, a summary of data
> >>>> variance that places an upper limit on the amount of physical
> >>>> information that might be represented by the data.
> >>>
> >>> Shannon's information *is* conventionally a measure of its "surprise
> >>> value".
> >>>
> >>> E.g.:
> >>>
> >>> ``The intuitive feeling that the greatness of an equation reflects
> >>> the element of surprise receives a kind of confirmation in Shannon's
> >>> equation I = -p log2 p, which forms the basis of information theory.
> >>> Igor Aleksander describes the message it conveys as follows:
> >>>
> >>> ...[T]he amount of information I depends on the surprise that the
> >>> message holds. This is because the mathematical way of expressing
> >>> surprise is as a probability p; the less probable an event is, the
> >>> more surprising it is and the more information it conveys.''
> >>>
> >>> http://americanscientist.org/template/BookReviewTypeDetail/assetid/21209
> >>> [I hope this long URL survived the s.b.e URL mangler!]
> >>>
> >>> ``Shannon sees things a little differently. He defines information
> >>> as a measure of uncertainty as to the next message to be received in
> >>> the communication or messaging event. The higher the uncertainty or
> >>> surprise, the greater the information, and the greater the entropy.
> >>>
> >>> - http://www.mgtaylor.com/mgtaylor/jotm/winter97/infotheory.htm
> >>>
> >>> Inconsistency with prior knowledge is *exactly* what Shannon's
> >>> information metric is all about.
> >>>
> >>> Information is *totally* subjective in Shannon's model. A message
> >>> that one agent finds informative may contain no information if it is
> >>> encountered by another agent.
> >>>
> >>> Shannon's information is most certainly *not* an upper limit on the
> >>> amount of physical information that might be represented by some
> >>> specified data.
> >>>
> >>> That might be termed "theoretical information capacity" or "data
> >>> storage size".
> >>
> >> Well, one of us should be able to learn something here. My
> >> understanding of Shannon information is clearly at odds with yours,
> >> and with the (IMHO misleading) quotes you provided. If you are correct
> >> that Shannon information has anything whatsoever to do with subjective
> >> interpretation or the "surprise" factor, then you ought to be able to
> >> show me how the observer's viewpoint is represented in Shannon's
> >> equation. It appears to be utterly absent to me, and thus would play
> >> no role at all in the meaning of Shannon information.
> >
> > The subjective element is in the probability.
>
> Ah yes -- the "p" has been interpreted both as a probability and as a
> proportion. I was assuming the latter, although you are correct that in
> more formal descriptions it is usually considered to be a probability. I
> can see how that introduces the "surprise" factor. I stand corrected on
> the metaphor historically applied to Shannon's equations, although I also
> stand firm on my claim that the classic equation H = -sum(p_i log p_i) is
> often applied as a static and objective measure of information content. I
> will also give in, however, on the fact that the number of kinds of parts
> (e.g., letters in an alphabet) can often be a matter of subjective
> interpretation. On the other hand, measuring the proportions of each type
> in data ought to be an objective exercise.
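For what it's worth, here is roughly what that "static" reading computes -
a small Python sketch (the function name and the example string are mine,
purely illustrative): take the observed proportion of each kind of part in
the data and evaluate H = -sum(p_i log2 p_i).

  from collections import Counter
  from math import log2

  def static_entropy(data):
      # H = -sum(p_i * log2(p_i)), with p_i the observed proportion
      # of each distinct symbol in the data.
      counts = Counter(data)
      total = len(data)
      return -sum((n / total) * log2(n / total) for n in counts.values())

  print(static_entropy("AABAC"))  # about 1.37 bits per symbol
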
Under circumstances where agents have the same information about
the stream being measured, they will agree about its information
content.
For instance, if they were both shown a number of red and blue balls
being put into a bag and then drawn out, they will make the same
estimates of the probabilities involved - and of the information they
got from seeing the results - assuming their memories of which balls
have previously been taken from the bag are equally good.
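A minimal sketch of what I mean (Python; the balls, the fixed fifty-fifty
estimates and the names are mine, for illustration only - real agents would
update their estimates after each draw): if two agents assign the same
probabilities to the draws, the surprisal -log2(p) they attach to what they
see is necessarily the same.

  from math import log2

  def information_in_bits(draws, prob_estimates):
      # Total surprisal of the observed sequence, -sum(log2 p),
      # under one agent's probability estimates.
      return -sum(log2(prob_estimates[d]) for d in draws)

  draws = ["red", "blue", "red", "red"]
  agent_a = {"red": 0.5, "blue": 0.5}  # both agents watched the same balls go in,
  agent_b = {"red": 0.5, "blue": 0.5}  # so their estimates coincide...
  print(information_in_bits(draws, agent_a))  # 4.0 bits
  print(information_in_bits(draws, agent_b))  # 4.0 bits - they agree
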
However, agents having the same information to go on is not really
the most general case.
> > For example, an agent with a complete understanding of the source
> > would be able to identify each symbol with a high degree of
> > certainty. His estimate of the probability of each symbol arising
> > will vary at each step - and will be close to either 1.0 or 0.0 for
> > each symbol. Witnessing the stream will give this agent little or
> > no information.
> >
> > A different agent without such knowledge might make very different
> > estimates of the probabilities - and witnessing the stream will
> > provide that agent with much more information.
>
> Again, I acknowledge that this is the standard Bell Labs metaphor, and the
> reason Shannon developed his model. However, I am less compelled by the
> metaphor than I am by the equations. Using a more commonplace definition
> of information I could argue that the prepared agent will be able to better
> understand the message, while the less well prepared agent could train
> itself on the message to learn the language. [...]
Maybe - given time and more exposure to the messages.
However, even well-prepared agents can often brush up on their skills.
For example, agents can sometimes bribe the message sender - and receive
transcripts of the messages in advance of them being sent. Is that
allowed?
Where do you stop with the process of arguing that the message recipients
could be better educated about the nature of the messages being sent?
> I still argue that in that context the measure of Shannon
> information establishes an objective maximum information
> content for a data set, independent of an observer's viewpoint.
Shannon information being considered independent of the agent
measuring it seems like a confusing idea.
I'm happy to allow that media have "maximum information" capacities,
though - the largest volume of information they could be used to
transmit. If that's what you are referring to, that's fine.
However - if you are saying that the information content of
a particular message is ever somehow objective - I suspect
that is likely to be misleading.
Best perhaps to state that the message transmits so many bits
of information - on the basis of some specified probability model
of symbols occurring.
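Something like this, say (a Python sketch; the two models and the message
are made up for illustration): the same message comes out at a different
number of bits depending on which symbol-probability model is specified up
front.

  from math import log2

  def message_bits(message, model):
      # Bits conveyed by the message, -sum(log2 p(symbol)),
      # relative to a stated symbol-probability model.
      return -sum(log2(model[s]) for s in message)

  message = "ABAB"
  naive_model    = {"A": 0.5,  "B": 0.5}   # an agent who knows nothing about the source
  informed_model = {"A": 0.99, "B": 0.01}  # an agent who expects almost nothing but A's
  print(message_bits(message, naive_model))     # 4.0 bits
  print(message_bits(message, informed_model))  # about 13.3 bits - the B's surprise this agent
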
--
__________
|im |yler http://timtyler.org/ tim{at}tt1lock.org Remove lock to reply.