Talk:Sigcomm 2012

In preparation for SIGCOMM 2012, and general directions on broader architectures.

The current content is taken from the group's emails on the topics. Please feel free to change the content or structure. Each revision is logged.

To edit, click the edit button at the top. You can check the instructions for formatting. For most editing tasks, you can simply look at the already formatted text.

Sigcomm Structure

[John] For Sigcomm we need to focus on layered and networked architecture, but to sharpen what we mean by that it would be helpful to have clear examples of "not layered" and "not networked" that are nevertheless effective architectures. In my PNAS paper I think I made "layered" deliberately too flexible a notion so that it described both garment/fiber and outfit/garment, which I claimed were very different.

  • [Dirk] I understand the 'not layered' aspect but don't get the 'not networked' aspect for a distributed system debate. For 'not layered', work like Haggle here in Cambridge could be interesting. Efficient though? That was the claim but there's little/no evidence.

[John] (I want to also make an aside that I need to distinguish between foreground reading, what we will directly build on for a Sigcomm paper and can assume reviewers and readers have also read, and background reading, things that help us find examples and perspectives but we can't use too much because they are unfamiliar. Right now, Dovrolis is #1 foreground, with the various Shenker papers. Everything else is background...)

[John] One issue I'm struggling with is describing the nature of the new "high waist" that separates a VM associated with content (whose subarchitecture is discussed in terms of OS and programming languages) from a VM associated with resources (OR and control theory) that hides and manages the PMs that everything runs on. It seems that this layering is actually more akin to VM on PM than VM apps on VM OS. What could I possibly mean? I think Sloman could help with that, but right now I'm not sure how to make any of that clear.

  • [Dirk] See above - I don't think that layering is more akin to VM on PM - it's all layering at different levels of abstraction (which is what layering is all about, isn't it?). See also some of the more recent cartoons I added to the overall cartoon slides.

[John] There is a tension between wanting to be recursive (a la Day) and to downplay the VM vs PM distinction, and emphasizing the profound discontinuity that happens at the VM/PM interface, wherever it may occur (mind/brain, digital/analog, active/passive, etc.)...

  • [Dirk] Is there a discontinuity? I don't see Sloman implying this. I would argue that he's trying to find the conceptual framework that overcomes the 'seemingly existing' discontinuity. And I would argue that WE are trying to do the same?!
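
A minimal sketch of the distinction being circled here (my own toy, not Sloman's or Day's formulation, and every name in it is made up): "app on OS" layering composes objects of the same virtual kind through a call interface, while "VM on PM" layering crosses an interpretation boundary, where the upper layer exists only as a pattern of state that the machine below turns into behaviour.

# Toy illustration (my own, not from Sloman or Day): two kinds of "layering".
#
# (1) App on OS: both layers are software; the boundary is just a call
#     interface between objects of the same virtual kind.
# (2) VM on PM: the boundary is an interpretation step -- the upper layer
#     exists only as a pattern of state that the machine below turns into
#     behaviour.

# --- (1) App on OS: ordinary composition of virtual objects ------------------
class ToyOS:
    def __init__(self):
        self._files = {}

    def write(self, name, data):        # the "API" the app calls
        self._files[name] = data

    def read(self, name):
        return self._files.get(name)

def app(os_api):
    os_api.write("note.txt", "hello")
    return os_api.read("note.txt")

# --- (2) VM on PM: a program interpreted by a lower-level machine ------------
def physical_machine(program, max_steps=100):
    """A crude 'PM': the VM-level program has no existence here except as
    data that this loop turns into state changes of a different machine."""
    acc, pc, mem = 0, 0, {}
    while pc < len(program) and max_steps > 0:
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "STORE":
            mem[arg] = acc
        pc += 1
        max_steps -= 1
    return mem

if __name__ == "__main__":
    print(app(ToyOS()))                                                  # hello
    print(physical_machine([("LOAD", 2), ("ADD", 3), ("STORE", "x")]))   # {'x': 5}

Whether that interpretation boundary is a genuine discontinuity or just one more abstraction step is exactly the disagreement above; the sketch only tries to show why the two cases can feel different.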

[John] If I haven't confused things enough already, another person who does stuff I like and has a completely different point of view (Walter has met her), is Carliss Y. Baldwin. The first set of slides on

http://drfd.hbs.edu/fit/public/facultyInfo.do?facInfo=ovr&facId=6417

are just perfect. The Apple vs Android battle now playing out is a great case study, only too current for us to use effectively. As Wu emphasizes, it is policy (courts, markets, etc.) that will determine the outcome more than tech.

  • [Dirk] Absolutely, which is why it is important to think about optimization decomposition as much as thinking about tussle spaces (I used the word 'satisficing' in related work in the design for tussle space). Not everything is an optimization problem per se; often it is a weighing of interests, with court procedures implementing this process of weighing. In a system like the Internet, you cannot avoid going into the policy space. The current sigcomm outline has, therefore, allusions to the optimization work as much as to the design for tussle work!
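
To make the "optimization decomposition" reference concrete, here is a minimal sketch of the generic network utility maximization (NUM) toy solved by dual decomposition (a textbook illustration, not anything taken from the tussle paper or the current outline; all numbers are invented): each source solves a purely local problem given a congestion price, the link updates the price from local information, and the decomposed pieces converge to the utility-maximizing allocation.

# Minimal sketch of network utility maximization (NUM) solved by dual
# decomposition -- the generic "layering as optimization decomposition" toy.
# Two log-utility sources share one unit-capacity link; all numbers invented.
CAPACITY = 1.0
STEP = 0.1          # dual (price) update step size
price = 1.0

for _ in range(300):
    # Each source solves its own local problem: max log(x) - price*x  =>  x = 1/price
    rates = [1.0 / price, 1.0 / price]
    # The link updates its congestion price from local information only.
    price = max(1e-6, price + STEP * (sum(rates) - CAPACITY))

print("rates:", [round(r, 2) for r in rates])   # approach 0.5 each
print("price:", round(price, 2))                # approaches 2.0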

[John] Our best example of grotesquely bad architecture will almost surely remain medicine up until the mid-1800s, but I'm getting better ideas on how to distill what a "bad architecture" is, and it would help to line up more simple, accessible examples so we can maybe show what the patterns are. IP is still the essential example, but it is new and a mix of good and bad, so it will help to show more of the range from pure bad to "pure good"??? (I think the most perfect architecture is the bacterial biosphere, not very helpful for Sigcomm, but that makes it a priority to explain. The VM vs PM story is great there, so I'm eager to drop everything and write a clear paper on this architecture, but I'll resist...)

  • [Dirk] One thing after another ;-) A 'purely' architectural paper would be lovely, though it is unlikely to go anywhere in the nets community. But PNAS could be a good target for that. I believe our specific SIGCOMM discussions do help with that bigger exercise, though.

Current Literature

The Evolution of Layered Protocol Stacks Leads to an Hourglass-Shaped Architecture

[John] It would be helpful, it seems, to propose an opposite extreme as a contrast: a minimal model that is all about function and purposeful design.

  • [Dirk] It just needs a focus on what matters in a successful architecture: the ability to construct something against some metric of optimality. His paper doesn't give that. It simply says that every house eventually converges to four walls with a roof, despite a wide range of inefficiencies in, e.g., insulation.
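
One way to make the proposed "opposite extreme" concrete is a model in which the stack is chosen purely against an explicit metric: pick the cheapest set of modules that covers the required functions, with no competition or history in the picture. This is only a made-up toy (the functions, candidate modules, and costs below are all invented), but it shows what "constructing against some metric of optimality" would look like in its purest form.

# Toy "opposite extreme": a stack chosen purely against an explicit metric.
# The required functions, candidate modules and costs below are all invented.
from itertools import combinations

REQUIRED = {"naming", "routing", "reliability", "congestion_control"}

CANDIDATES = [                      # (name, functions provided, cost)
    ("A", {"naming"}, 1.0),
    ("B", {"routing"}, 1.0),
    ("C", {"naming", "routing"}, 1.5),
    ("D", {"reliability", "congestion_control"}, 2.0),
    ("E", {"reliability"}, 0.8),
    ("F", {"congestion_control"}, 0.9),
]

def best_stack():
    """Pick the cheapest set of modules that covers every required function."""
    best, best_cost = None, float("inf")
    for r in range(1, len(CANDIDATES) + 1):
        for combo in combinations(CANDIDATES, r):
            provided = set().union(*(fns for _, fns, _ in combo))
            cost = sum(c for _, _, c in combo)
            if REQUIRED <= provided and cost < best_cost:
                best, best_cost = [name for name, _, _ in combo], cost
    return best, best_cost

print(best_stack())   # the metric alone decides, e.g. (['C', 'E', 'F'], ~3.2)

The contrast with the hourglass model is then exactly the point above: here the metric does all the work, there it never appears.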

On Using Shapiro

Source: http://shapiro.bsd.uchicago.edu/genome.shtml + Attachments

[John] What this does for engineers is explain that evolution does not happen only by accumulation of small random mutations, with selection as the creative force; instead, huge changes in genomes are possible and important (changes to an organism's genome can be comparable in size to the original genome itself). The genome is not a slowly changing ROM but a highly controlled RW memory.

This is compatible with and complementary to Gerhart and Kirschner (who focus on development, metazoans) but has a focus on bacteria and mechanisms of "natural genetic engineering". G&K are much more architectural, whereas Shapiro doesn't discuss architecture much, just clever NGE mechanisms. (If you haven't read the PNAS paper circa 2007 by G&K on facilitated variation, that is a must read...)

We can add a lot to Shapiro's story, because NGE relies on shared architectures to work. If a bacterium gets a set of genes from elsewhere by horizontal gene transfer (HGT) it not only must be able to read the genes (shared codons) but the resulting RNA and protein products must plug into the whole protocol stack. Only G&K within biology focus on the nature of this stack, and they are just scratching the surface.
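
A toy analogy for this point (my own illustration, not from Shapiro or G&K; the codon table is only a tiny fragment and the "pathways" are invented): a horizontally transferred gene helps only if the recipient reads the same codon table and its product plugs into an interface the host already has.

# Toy analogy (mine, not Shapiro's or G&K's): a transferred "gene" only helps
# if the recipient shares the lower layers -- the codon table, so it can be
# read at all, and an interface its product plugs into.  The codon table is a
# tiny fragment used for illustration only.
SHARED_CODONS = {"AUG": "M", "GCU": "A", "UUU": "F", "UAA": "*"}

def translate(mrna, codon_table):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = codon_table.get(mrna[i:i + 3])
        if aa is None or aa == "*":     # unreadable codon or stop
            break
        protein.append(aa)
    return "".join(protein)

class Host:
    """A recipient cell: a codon table plus the 'interfaces' (pathways)
    into which any new protein must plug to be of use."""
    def __init__(self, codon_table, pathways):
        self.codon_table = codon_table
        self.pathways = pathways
    def acquire(self, transferred_mrna):
        protein = translate(transferred_mrna, self.codon_table)
        return self.pathways.get(protein, "no effect (doesn't plug in)")

gene = "AUGGCUUUUUAA"                                               # reads as "MAF"
compatible = Host(SHARED_CODONS, {"MAF": "new function gained"})
incompatible = Host({"AUG": "M"}, {"MAF": "new function gained"})   # different lower layer

print(compatible.acquire(gene))      # new function gained
print(incompatible.acquire(gene))    # no effect (doesn't plug in)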

There are two obvious points to make here relevant to Dovrolis, and one for the Shenker direction (and presumably many more, but these are obvious ones).

  1. One is that biological evolution is not like his model, though the mainstream cartoon version is (this is a great example of being misled by biomimetics, but we shouldn't be too harsh here, as the G&K and Shapiro versions are very, very advanced views of evolution). Both G&K and Shapiro say it is in fact extremely different from the cartoon, and though the mechanisms are well known (HGT and other forms of NGE), the "big picture" consequences are not, and the study of the role of layered architectures is just at its beginning (so many papers for biologists are needed on this...). But for us, the point is that we at least want to think about the real version of evolution and not the cartoon. The cartoon is now primarily a rhetorical and pedagogical oversimplification with little relevance to reality (and most biologists really don't care, they are too worried about their molecule of choice).
  2. The other is that both evolution (however clever it is) and intelligent design (only done by humans) can be grotesquely myopic. Hemorrhagic fevers and aggressive cancers destroy their hosts so quickly as to be essentially suicidal for the parasite/tumor, and do so in ways that are horrific for the host. (If you believe this is the result of supernatural ID, then the designer is truly evil.) Similarly, medicine for 2000 years was horrific for the patient. Really explaining the essential role that architecture (and its hijacking of shared protocols) plays here is hard to do, but I think the basic ideas are fairly clear and familiar to networking people (don't smash the system; leave most of it intact and hijack it if you want to derive benefit, and/or do maximal harm). (If you want to maximally harm patients, don't destroy hospitals and kill doctors; rather, convince the doctors to use a set of protocols that do maximal harm but will also maximally persist. That this was done by Hippocrates, Galen, and Rush, as in Rush in Chicago... who presumably cared about "doing no harm", is amazing...)

Thus we need architecture for innovation (both in bio and tech) but we don't have billions of years to wait for it to evolve, and we also can't afford the "grotesque myopia" and all its attendant side effects. But ID is no guarantee because it is hard to avoid unintended consequences.

Architecting for Innovation

[Dirk] Response to page 26 "Modularity is a basic tenet of system design... architectural modularity requires more than layering: it requires that interfaces be both extensible and abstract. By “extensible” we mean that new functionality can be added to a particular component, and utilized by other components that are aware of this change, without rendering unmodified components obsolete... For instance, interfaces should not pass network addresses or particular byte layouts, but instead should pass names and structured data"

This does point to extensibility but it does not bring the point home fully (and might even carry a dangerous side point): extensibility is important (we're saying the same), but it seems to imply that this should be done through extensible interfaces (option fields, schema-based data instead of bytes/bits?) rather than extensible layering. The latter would be our message, wouldn't it? Now, the authors might have this in mind but focus in their paper on what they call the 'network API', so it all seems to be focused on that. An architectural layering message extends this to an approach to layering/modularizing that is not limited to the 'network API'.

On the positive side, this is something to build upon since WE can bring this point home. It's a direct conclusion from the tussle paper, i.e., modularity matters and this modularity matters at design time, calling for an architectural approach that provides such modularity (here, modularity is a more general form of layering).
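
To pin down the quoted "extensible and abstract" point, a small sketch (my own toy, not the paper's actual API): with structured, named data, an extension added by one component is simply ignored by unmodified components, whereas a fixed byte layout breaks them. "Extensible layering" would then be the same move applied to whole layers rather than to individual messages.

# Toy sketch of the quoted interface point (not the paper's actual API):
# structured, named data lets aware components use a new field while
# unmodified ones keep working; a fixed byte layout does not.
import struct

# --- fixed byte layout: position and size are baked into every reader -------
OLD_FMT = "!4sH"                      # 4-byte address + 16-bit port, nothing else
def old_reader(packet_bytes):
    return struct.unpack(OLD_FMT, packet_bytes)   # breaks if a field is added

# --- structured, named data: unknown fields are simply carried or ignored ---
def aware_component(msg):
    return msg.get("priority", 0), msg["name"]    # uses the extension if present

def unmodified_component(msg):
    return msg["name"]                            # ignores fields it never knew about

msg_v2 = {"name": "service.example", "port": 80, "priority": 7}   # extended later
print(unmodified_component(msg_v2))   # still works: service.example
print(aware_component(msg_v2))        # (7, 'service.example')

packet_v2 = struct.pack("!4sHB", b"\x0a\x00\x00\x01", 80, 7)      # extended layout
try:
    old_reader(packet_v2)
except struct.error as e:
    print("fixed layout broke:", e)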

There is a fairly clear story here to clarify what bio-evolution really is (as opposed to the cartoon), how it relates to architecture in the sense we mean it (even if only G&K think this way), why ID is important, etc.

The next thread I'd like to address is the role of "theory" in all this.

One issue that Shenker doesn't address is that whatever architecture we have, it will be used for cyberphysical systems, and closing physical loops around, say, the power grid is clearly a totally different game. The new proposals are mostly good news compared to IP since they localize the lower layers and allow for completely different control systems... but this needs some attention.
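
A minimal sketch of why closing physical loops over a network is a different game (standard discrete-time control intuition, not taken from any of the papers under discussion; the plant and gain values are arbitrary): the same feedback that stabilizes an unstable plant with fresh measurements destabilizes it once the measurements arrive a couple of network round-trips late.

# Minimal sketch (standard discrete-time control intuition, not from the
# papers above): an unstable scalar plant x <- a*x + u with feedback u = -k*x
# is stabilized by fresh measurements but destabilized once the measurement
# arrives a few network round-trips late.  a and k are arbitrary choices.
def run(delay_steps, a=1.2, k=0.8, steps=60):
    x = 1.0
    history = [1.0] * (delay_steps + 1)          # measurements seen so far
    for _ in range(steps):
        measured = history[-(delay_steps + 1)]   # stale sample from the network
        u = -k * measured
        x = a * x + u
        history.append(x)
    return abs(x)

for d in [0, 1, 2, 3]:
    final = run(d)
    verdict = "stable" if final < 1.0 else "unstable"
    print(f"delay {d}: |x| after 60 steps = {final:.3g} ({verdict})")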

Information-Centric Networking: Seeing the Forest for the Trees

[Dirk] The paper makes a very valid point about caching, something we've said in the PURSUIT project all along: caching by 'storing interest and data requests' is unlikely to be either efficient or sufficient. There is a role for managed caching, whereas this type of lateral caching is likely to be as ineffective as web proxying. That's why we've developed managed caching solutions in PURSUIT (and were critiqued for it).
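
A rough toy comparison of the two caching styles (my own sketch with invented Zipf parameters, not a PURSUIT result): a small cache filled opportunistically from recently requested items tends to land below one where the known-popular items are deliberately placed, which is the intuition behind preferring managed caching.

# Rough toy comparison (mine, not a PURSUIT result; Zipf parameters invented):
# a small cache filled opportunistically from recent requests (LRU) versus a
# cache where the known-popular items are deliberately placed.
import random
from collections import OrderedDict

random.seed(0)
N_ITEMS, CACHE_SIZE, N_REQUESTS = 1000, 20, 50_000
weights = [1.0 / (rank + 1) for rank in range(N_ITEMS)]            # Zipf(1)-like popularity
requests = random.choices(range(N_ITEMS), weights=weights, k=N_REQUESTS)

def lru_hits(reqs):
    cache, hits = OrderedDict(), 0
    for item in reqs:
        if item in cache:
            hits += 1
            cache.move_to_end(item)
        else:
            cache[item] = True
            if len(cache) > CACHE_SIZE:
                cache.popitem(last=False)                          # evict least recently used
    return hits

def managed_hits(reqs):
    placed = set(range(CACHE_SIZE))            # deliberately place the most popular items
    return sum(1 for item in reqs if item in placed)

print("opportunistic (LRU) hit ratio:", round(lru_hits(requests) / N_REQUESTS, 3))
print("managed placement hit ratio:  ", round(managed_hits(requests) / N_REQUESTS, 3))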

[Dirk] The paper underestimates the challenge in naming. Pointing to their own work, published in the SIGCOMM ICN workshop, they treat the naming discussion as laid to rest, which I believe is utterly wrong. Their workshop contribution (on the separation of real-world identities as long-lived identities and shorter-lived labels) is only the start of the necessary discussion. If you want a hint of the extent of the discussion, you can ask Karen Sollins ;-)

Naming in Content-Oriented Architectures

Sigcomm Paper Components

and other food for thought...

Examples of Bad Waists

[John] There are some interesting "fish parts" examples; one of the most infamous is the giraffe neck and nerves in it that used to run straight in fish but ended up in a tortured curve in the giraffe:

http://14-billion-years-later.tumblr.com/post/2962924626/the-evidence-for-evolution

[John] Burning fossil fuels is one. Settling conflicts by warfare is another fundamental one. Monotheism. QWERTY. Irregular verbs... IP is not a particularly bad example. Windows is maybe better?

  • [Dirk] In any sigcomm exercise, it's not only politically interesting (necessary?) to point out IP. Many things are right there, but pointing out, in a subtle way, the things that (architecturally) have led to problems is important and can provide a way forward. The CCR attempt had many points already that we can build on.

Virtual Machinery (re: Sloman)

See email for source

[John] For Sloman, the layering of VM apps on a VM platform (e.g. apps on an OS) is also very distinct from the layering of VM on PM (software on/in hardware), and I think this is a huge and important distinction, and I'm still not explaining it well. We understand it so completely that we don't seem to describe it sensibly.

  • [Dirk] Isn't that the very notion of layering itself, with different abstractions along these layers while still all following the same 'thread'? (It seems to be something around information in Sloman's thinking; see also the mentioned slide 49 in set 2.) To me, VM app -> VM platform -> PM is the particular instantiation in Sloman's architecture.

[Dirk] The most concrete point is the 'problem-specific' execution in running VMs (see e.g., slide 25 in set 2). And he emphasizes the role of information. The features of machines in slide 11 (set 2) are good aspects to think about, in particular where he sees the boundary between machine and environment blur, something current systems are still very bad at. But some of his examples, such as memory management, are still too immersed in today's systems (this is where I find Kay's work more interesting, since it embeds what Sloman calls NPDs more visibly into the physical structures through the associations being built. He does see memory management as context-dependent but does not seem to elaborate on the complexity of this context beyond remapping contiguous memory blocks).

[Dirk] Another clear and interesting aspect is the runtime execution, something that's very underspecified so far in distributed systems (it's done but without great understanding from the design perspective). We tried to highlight this in the SIGCOMM submission but it's hard to explain and conceptualize well.

[Dirk] But his remaining hard problems on slide 39 (in set 1) really give us the mandate: the role of information and what 'intelligent design' in evolution really looks like. It's a great motivation but it's not yet there for our consolidated view, I feel.

[Dirk] Slide 49 in set 2 is the closest to our concepts, I believe: it emphasizes the role of information, it introduces mediation (not specifically, but still), it highlights the core functions of manipulation (internal and throughout a distributed system), and it highlights that the structural properties of the information are important (I translate this as: you will need an approach that can accommodate a variety of app-specific ontologies). All of this comes together in an architecture/design!! Now THAT is what the current exercise is roughly about, isn't it? And these are roughly the concepts that are highlighted in Section 5 of the current draft.

[Dirk] At least two problems here:

  1. We won't have the space to elaborate at such length on how we get to our concepts. The current shortcut is that of IP as a case study. We had/could have a section on other case studies - here is where we could segue Sloman into the picture. But there's always the avenue of other venues with a more bio/human spin, for instance. Not everything will/needs to fit into a single (SIGCOMM?) submission.
  2. Digesting Sloman is inherently difficult, and my experience with the Sigcomm audience is not the best when it comes to (i) understanding Sloman and (ii) wanting to understand Sloman. So there's a danger of losing the audience if Sloman's direction is presented too prominently (maybe not the best example, but the basis for my experience was our Sigcomm submission, which featured Sloman really prominently, albeit likely inadequately presented - it didn't go well).
  • [Dirk] P.S.: one sentence I like particularly is "...that discovering what is possible, is a more fundamental function of science than discovering laws and correlations." on slide 15 ;-)

[John] For Sloman, the layering of VM apps on a VM platform (e.g. apps on an OS) is also very distinct from the layering of VM on PM (software on/in hardware), and I think this is a huge and important distinction, and I'm still not explaining it well. We understand it so completely that we don't seem to describe it sensibly.

  • [Dirk] He's very helpful for the much wider debate. There could still be a place for a discussion outside comms that could shine a light on important insights for the SIGCOMM exercise, i.e., a small section on insights from Sloman - but one needs to see how it reads in the end.