Peeragogy dev (December 2020)

Continuing the discussion from Forming a Moderation Team + Improvements to OpenGlobalMind.com:

(Splicing my follow-up: that’s a fun move.)

@lovolution and @skreutzer had mentioned that CSC is online now! I think I’ll hold off signing up until the end of December. I can come for a visit in the new year! Hopefully I can announce completion of these Peeragogy migration tasks, preparatory to building version 4 of the Handbook.

  • (1—DONE) Google Groups → Oregon State University Open Source Lab (OSU OSL)-hosted GNU Mailman list (opt-in: here).
  • (2) Master copy → Migrate to Org format as in PeeragogyORG repository.
  • (3) peeragogy.org → Migrate to Firn and to new hosting provided by OSU OSL.
  • (4) Editing → Prefer Logseq and Org Roam in Emacs (these should be interoperable, some setup required).
  • (5) LaTeX/PDF → Migrate to Tufte style.
  • (6) Google Chat → Close down in preference to Zulip so that all discussions are in one place and properly archived.

Once that’s ready I’d likely have energy to do some work on the OGM website; though, maybe what’s really wanted is someone with front-end and design skills. At the level of network plumbing, we could look into networking the Peeragogy Zulip together with the CSC Agora, and others, into another location, via Matterbridge, to create a shared feed of key announcements (should that be useful). Maybe this could create a nice feed to put on OGM for example. I know that @robert.best has worked with Matterbridge before.

Regarding the tech stack outlined above and the Peeragogy Handbook v4:

  1. The changes outlined will mean we have a fully free software setup, which is something I’ve been wanting for a while.
  2. When people are up to speed, it should also help to remove me as a bottleneck for the project, which is another major longstanding and un-dealt-with objective.
  3. We also have major plans for content updates (not covered above).

If people are willing to look ahead to v5 of the Handbook, I have in mind a more ambitious data-modelling approach that can make sense of lots of snippets from disparate sources. I’m currently thinking to build it using Crux, having learned from another project, Kosa, that I was trying to help with as a volunteer (but didn’t get very far because I got overwhelmed with other things). It might be interesting with regard to the knowledge management tasks that OGM will have on the horizon. E.g., perhaps a successor or add-on to the Semantic MediaWiki.

tl;dr: I foresee solidifying a lot of tech related to Clojure in the next year or two, and I’d love it if others happen to get interested in some of this as time goes by. But, for now, I think we still need to work a lot more on the overall system design. I filed this under “The Human Side” because I’d appreciate dialogue about the design aspects (subject to overlapping interest!).

2 Likes

Great stuff, and insofar as it might be of broader technical/infrastructural relevance/interest for OGM/others as well:

Which repository? The one of the Peeragogy Handbook v3/v4? If so, what would the point of the migration be? Isn’t the master copy in Markdown, with the rationale/reasoning that GitHub/HackMD renders it by default? Is that also the case for the Org format? At the same time, aren’t both Markdown and Org format (and Wikitext and many others) without support for semantic bootstrapping, so what would be the point of switching one arbitrary primitive syntax for another?

Notice that “free” (as in liberty) means, for online services, that the user should be allowed and enabled to set up and run their own instance; and while copyright licensing for software/tools doesn’t extend to data/content, it’s a moot point if users could theoretically set up their own instance while the data/content itself remains non-open or technically/administratively restricted from export/import/conversion. But then, if it’s mostly throwaway communication or content of a dialog-exchange type, it’s probably not that relevant/interesting/useful anyway.

v3, v4 or v5, Peeragogy might contribute to OGM in terms of experience + implementation of patterns and pattern languages, notwithstanding that OGM started their own, separate variant of such as well.

As I learnt from the call recordings, it sounds like it’s not really that semantic, just graph-based (which might be somewhat nice/useful for relating patterns, but again lacks the semantic bootstrapping needed for augmentation).

I guess no experienced developer would jump into a language war (nobody really cares, and it doesn’t really matter), but for OGM, is there a particular benefit/rationale to look into Clojure as well, given that there are probably not many existing stacks and code bases in Clojure?

Yes, I think it’s reasonable to say that an objective of this will be for everyone to be able to set up their own Handbook or similar data structure (for arbitrary topics), using the same tools and teachable/learnable methods that we use.

Yes; furthermore, I regard Peeragogy as para- to OGM, so it’s a good opportunity to do paragogy. My role model here, I suppose, is Socrates.

Whether Socrates’ project of engaging the young people of Athens in philosophical conversations was a corrupting one lies in whether we think that this kind of dialectical relationship with our own city, country, values and ‘gods’ is a healthy one. — https://www.philosophy-foundation.org/corrupting-youth

Perhaps Arxana can take over here. It may still just “be” a graph at the level of data representations, but richer tools could be deployed in concert with that. E.g., within Hyperreal Enterprises we’re talking about how moving to a word-vector representation gives us the basic foundation for adding more complex structures on top; perhaps something like this is what you’re thinking of? I hadn’t yet considered adding this to Arxana, but it would fit in nicely: each word would explicitly have a number within a given model, and, as well, each character would have a position within each buffer. Such things are addressable in space (like Xanadu) and also in meaning-space. From this we might bootstrap things like recognising ‘patterns’ in text.
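A toy sketch of that double addressing, with a hand-made two-dimensional “model” (the vectors and vocabulary here are invented purely for illustration): each token gets a buffer position (addressable in space) and, if the model knows it, a vector (addressable in meaning-space).

```python
# Toy sketch: each token addressable both by buffer position ("space")
# and by a vector in a tiny hand-made model ("meaning-space").
# The 2-d vectors below are invented for illustration only.

import re

MODEL = {  # hypothetical word vectors
    "pattern": (0.9, 0.1),
    "text": (0.2, 0.8),
    "recognising": (0.5, 0.5),
}

def index_buffer(buffer):
    """Return [(word, start_offset, vector_or_None), ...] for a buffer."""
    tokens = []
    for m in re.finditer(r"\w+", buffer):
        word = m.group(0).lower()
        tokens.append((word, m.start(), MODEL.get(word)))
    return tokens

for word, pos, vec in index_buffer("Recognising a pattern in text"):
    print(word, pos, vec)
```

A real word-vector model would of course come from training, not a hand-written dictionary; the point is only that position and meaning coordinates can coexist per token.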

Not long ago, I spoke with someone who was strongly against using Clojure b/c it’s too hard to find Clojure programmers. But, if I can “create” (peeragogically) a talent pool, then we’d be good to go in that regard. I’ll certainly have a further think about whether it’s actually “better” in any way than using other existing more mainstream technologies. I don’t have an immediate answer. But…

[image: Lisp logo]

No, it’s nothing like that; it’s not related to this typical graph/AI interpretation/approach. It’s more like the way a Web browser augments text. There’s no need to prematurely care much about the exact identity or meaning of a single word for reasoning/inference/deduction, which is hugely complex and of questionable use/precision; instead it’s more about types/categories of words, or any larger spans of text, so that a user interface or processor can get a grip/handle on them and offer/perform certain operations.

I don’t think these metrics practically matter much (very roughly speaking) if a language encourages shorter code, more reuse, and dynamic, interactive programming, and if it’s good for concurrency, versus the advantages of other languages, except if there’s a particular environment/setup constraint or something. Personally, having gone into developing an interpreter for a different list-processing-like language (and likely doing a bad job at it), I wonder how it would go to write one for LISP (not Clojure) as an experiment/exercise, knowing well that this is very cliché and typical. Not another one, again!

The trouble with languages that cannot easily be fixed/improved is, for example, related to adoption/distribution/deployment. While they all compile down to chip instructions, the user still has to get and install the language first; and if it’s interpreted, the interpreter needs to be installed in advance, plus the scripts. Probably not very popular to run Clojure client-side in the Web browser or on Android? Want to run a Visual Basic application on a GNU/Linux box? What about Pascal on the server? What are the chances, and how many hoops to jump through, for it to be realistically practical? And then there are the many existing components (as well as earlier investments into them, including monetary commitments to rent licenses, lots of business logic developed, user expectations) and the need to interface with them, if one doesn’t want to re-invent all of these for no better reason than a different language. Take window managers: what if there’s no binding to Qt? What if the compiler/interpreter wasn’t ported to your operating system, or doesn’t compile for your chip? What if setting up a connection to a particular database implementation is a huge mess? So sure, one picks what’s well-supported and easy within the language, which is what other people spent their time on to make work (work well, hopefully) or re-invented for the language; so it ends up as vertically integrated stacks, at the exclusion of everything else that doesn’t happen to interface well, technically, support-/reliability-wise, or in terms of language/usage paradigm.

In an “Open Global Mind” context with many people using many tools/stacks/languages, or in a potentially unrelated context of hypertext system infrastructure, for compatibility, co-operability, and interoperability, I’m much more interested in interfacing: ideally making a whole bunch of different parts/components/capabilities work together, exchanging data/messages, with some code in whatever language in between doing the processing of input/output (very roughly speaking; I guess you can easily see the challenges/problems with even trying). Like the Turing test, for better or worse: the caller doesn’t need to (and even can’t) know what exactly did the data processing, in which language, and where, if the exchange/communication happens through an abstraction interface. Is that a good or a bad thing? At least with components/modules (not to get into earlier framings like interprocess communication/Interop, or the more recent microservice architecture), they don’t need to spread/inject their language’s assumptions/paradigms/dependencies onto other participants, while each module remains maintainable on its own, for its own sake.
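A minimal sketch of that abstraction-interface idea (all names invented): components exchange serialized JSON messages through one calling convention, so the caller never learns what implemented the processing, or in which language.

```python
# Sketch of language-agnostic interfacing: components exchange plain
# JSON messages through one abstraction interface, so the caller never
# sees the implementation. The component/function names are invented.

import json

def word_count_component(message: str) -> str:
    """One 'module'. Could equally be a process written in any language."""
    request = json.loads(message)
    result = len(request["text"].split())
    return json.dumps({"op": request["op"], "result": result})

def call(component, op, **payload):
    """The caller only handles serialized messages, never internals."""
    reply = component(json.dumps({"op": op, **payload}))
    return json.loads(reply)["result"]

print(call(word_count_component, "count-words", text="hello wide world"))  # → 3
```

In practice the message would travel over a pipe, socket, or HTTP rather than a function call, but the shape of the contract is the same.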

Are the ideas here (below) maybe helpful for “semantic bootstrapping”? As you say above, you’re less interested in marking up individual words or characters; maybe regions (linked with certain meaningful practices) are more aligned with what they talk about here. But I don’t understand what you mean by the “typical graph” approach. Is it important that the browser augmentation (e.g., the markup above showing that you quoted something from me) is different from a graph?

Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates. Despite the incredible effectiveness of language processing models to tackle tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world. It is this shared experience that makes utterances meaningful. Natural language processing is a diverse field, and progress throughout its development has come from new representational theories, modeling techniques, data collection paradigms, and tasks. We posit that the present success of representation learning approaches trained on large, text-only corpora requires the parallel tradition of research on the broader physical and social context of language to address the deeper questions of communication. —
https://www.aclweb.org/anthology/2020.emnlp-main.703.pdf

Clojure is pretty closely compatible with something called ClojureScript (which looks like Clojure but compiles to JavaScript), so even though people don’t run “Clojure” client-side, programming in Clojure+ClojureScript is quite a comfortable development paradigm.

Probably a need here to be more precise/nuanced:

I consider that more or less solved: for one, it’s easy to just count characters/words, which also doesn’t need server-side/remote cooperation (except for changes to the source; then it becomes about versioning or hashing etc.). Then I have an implementation of Ted Nelson’s EDL, and additionally there’s Web Annotation, for which I have a few parts too, independently from Hypothes.is of course (that’s the point).
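A rough sketch of that span-addressing idea (loosely in the spirit of Ted Nelson’s EDL; the structure and field names below are invented for illustration, not the actual EDL format or the implementation mentioned above): a quotation is referenced as (source, start, length) rather than copied, and a hash detects when the source has changed, which is where versioning comes in.

```python
# Illustrative span addressing: a quotation is not copied but referenced
# as (source, start, length), resolved on demand. The hash guards
# against silent changes to the source text.

import hashlib

def make_span(source_id, source_text, start, length):
    return {
        "source": source_id,
        "start": start,
        "length": length,
        "sha256": hashlib.sha256(source_text.encode()).hexdigest(),
    }

def resolve(span, sources):
    text = sources[span["source"]]
    if hashlib.sha256(text.encode()).hexdigest() != span["sha256"]:
        raise ValueError("source changed since the span was minted")
    return text[span["start"]:span["start"] + span["length"]]

sources = {"doc-1": "Peeragogy might contribute to OGM."}
span = make_span("doc-1", sources["doc-1"], 0, 9)
print(resolve(span, sources))  # → Peeragogy
```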

As the Web browser can use CSS classes to augment/render parts of the page, indeed as with the quotation here (but less so the reference/link showing that it’s from you; surely added in the same manner/mechanism, but then, why get into the complexity of disambiguating every “Joseph Corneli”, ideally across all online services/accounts, and then infer at which companies you worked at which times, and all the papers you published, and connect that to all the places you traveled to at which times? That huge, fragile construct, assuming the data is even correct, allows answering Siri questions if it works out well enough, while we can’t augment our very own simplest data in any way.) For example: highlighting people’s names, or a box for their profiles, etc. Or in Peeragogy patterns: how the pattern template groups together the text that belongs to a particular section of the template. One might then automatically extract all the #Context sections for an app that allows quick lookup/search, and on selection of one pattern based on its #Context, the full pattern can be shown, saving the need to look through all of them in their entirety.
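As a hypothetical sketch of that #Context extraction idea (the heading convention and sample pattern below are invented for illustration, not the actual Peeragogy template):

```python
# Illustrative: pull a named section out of a pattern written with
# "## Section" headings, e.g. to build a quick-lookup index of all
# Context sections. Heading style and sample text are assumptions.

import re

def extract_section(pattern_text, section="Context"):
    match = re.search(
        rf"^## {section}\n(.*?)(?=^## |\Z)", pattern_text, re.M | re.S
    )
    return match.group(1).strip() if match else None

pattern = """## Context
A group is forming around a shared goal.
## Problem
Nobody knows who does what.
"""
print(extract_section(pattern))  # → A group is forming around a shared goal.
```

The same extraction works for any section name, so an app could index every pattern by #Context and only fetch the full text on selection.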

About natural language, I won’t repeat the long ramble in the Zulip: it’s dynamic, not formal, so good luck with trying to use machines for computing the potential meaning of words/phrases (also, what tone, which context, etc.), right? Likely good enough for some useful approximations, but I’m much more interested in all the other equally important but dormant tooling that got abandoned, scrapped, or ignored because AI/NLP became the new hot hype.

Without looking it up, blind guess: it’s a LISP interpreter in JavaScript that gets shipped from the server side, right? Sure, anybody can do that, we too! :slight_smile: We might ship custom little DSLs and command languages for/in our own pages/services. These days people interpret PostScript in JavaScript, or emulate entire retro computers. The thing is that browsers don’t come with ClojureScript pre-installed, so it needs to come from somewhere, and then, as it’s delivered from the server side, it must be trapped in the browser sandbox for the safety of such remote code execution, fine (the system-in-a-system reinvention fallacy). And we could do that for Visual Basic and Pascal and eventually everything. Just because browsers are controlled/owned by very few vendors, mostly rich Web companies, where by historical coincidence Brendan Eich added a sloppy hack of a scripting language that he invented within two weeks, and that monstrosity is now the new portable “operating system”, also switching everybody into SaaS dependencies, both for lack of proper architecture as well as for profit.

Addition: actually it reads as if you’re accidentally making a case for JavaScript as a language to switch to and train, not ClojureScript :slight_smile:

OK, I think I understand more. Indeed the themes from the Zulip chat about+critiquing “Arrival” are relevant for my understanding. It seems to me (using a different term) that you’re interested in an applied “practice theory”. Whereas, the typical graph approach that is used/useful for answering Siri questions is more about “knowledge representation” (which builds on existing static representations). The reason that practice theory is different is that it’s linked with processes and actual behaviours.

If I’m onto something at all, then I’d say this is a point of alignment between us; of course each of us also has his own experience and viewpoint (e.g., my background with Emacs programming or reading 20th C. philosophy no doubt shapes the kinds of things I might say about “practice theory”, including using that phrase!).

Apologies, it’s always really hard to communicate some of these things, and if somebody else and I both say “semantics”, we may mean different things or focus on different aspects, and never take the time to carefully disambiguate/explore.

Sure, none of these things are clear-cut, always a mix, sometimes I just go with the less popular and try to promote that for balance.

Forgot to clarify “typical graph approach”: I’m kind of joking that everything is a graph, as everything can surely be represented as one. Text is a graph: just a linear sequence in which each character is connected to only one next node, the next character :slight_smile: (Similar to the other joke of a circle still being just a line :slight_smile: or a graph, for that matter.) But it’s understood/assumed that when people speak of graphs, they don’t mean it in that sense of strict, abstract graph theory (as that would be of little meaning/use); they imagine the typical messy web/network visualizations. I’ve come to the suspicion that this is hugely informed by a 1940s notion of how the brain might work, with places/regions of activity when the tester is shown a particular image, and these being connected via synapses. If only we could recreate that in software/data! And for sure, with the much older tools of taxonomies, making them not hierarchical but interconnected graphs, it goes quite some way (but not the entire way, I would assume).

The big unsolved, rarely admitted/addressed problem there is of course how to cut the nodes, and on what metrics to connect them. Hypertext too, after its very early pioneering stages, had the phase of realizing/recognizing the notion of “lost in hyperspace”: not only the confusion if everything is connected to everything, and how that could still be useful/meaningful/navigable (that’s the popular point of the notion), but also how to split otherwise linear text into snippets that still match up with other snippets when connected, remain readable/coherent, and prove to be any better than regular linear narratives, etc. (that’s just my personal booking of this as a contributing cause of the “lost in hyperspace” effect).
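The character-chain joke above, made literal as a tiny sketch (purely illustrative): text as a degenerate graph where each character node connects only to the next one.

```python
# Text as a degenerate graph: each character is a node, connected
# only to the character that follows it.

def text_to_chain_graph(text):
    nodes = list(text)
    edges = [(i, i + 1) for i in range(len(text) - 1)]
    return nodes, edges

nodes, edges = text_to_chain_graph("graph")
print(nodes)  # → ['g', 'r', 'a', 'p', 'h']
print(edges)  # → [(0, 1), (1, 2), (2, 3), (3, 4)]
```

A valid graph, and a useless one, which is exactly the joke: representability as a graph says nothing about whether the graph view adds meaning.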

1 Like

Maybe we could say that “everything is a graph” up to a certain point. On computers, most things are represented in logical terms, so trees and graphs are natural data structures (viz., parse trees, as in LISP, and/or matrices/datatables). It can add some extra baggage to re-represent some things as graphs, but it’s generally possible. Higher-dimensional discrete structures are sometimes more intuitive.

In nature, we have things like subterranean networks (I’ll email you a copy in case that’s paywalled).

1 Like

“Up to a certain point”: is that in reference to subterranean networks? Will take a look. Until convinced otherwise, I’m not aware of anything that can’t be represented as a graph, as it’s a relatively universal structure. OK, if a thing isn’t composed of atomic entities that can be connected, yes.

On LISP, from my experiments with writing interpreters (coming from DSL format parsers, upwards), I wonder if LISP really, actually is a parse tree (in design, implementation and/or practice). For the other language I was looking at, and from some videos/theory, I think there’s also the notion of list processing, and I’m not sure how all of these relate or not. But I think the idea is to come up with a pretty controlled, orderly syntax (especially for function calls), so the parser can cut up the names/identifiers and values/arguments and subsequently put them into their corresponding built-in lists/tables. An interpreter then gets much faster at running through these on lookup/invocation/execution, because it doesn’t need to parse the input anew every time (like keeping the AST model in memory, but then optimized/optimizable to fairly fixed structures, so you always know the number of arguments, or their start positions, etc., implicitly from the list the construct is in). Could be that all of this is long superseded/abandoned, some optimization for early, slow, low-memory computers; certainly it’s in use in some places, maybe it’s somewhere in LISP or even a dominant concept, and the implicit parse/call tree just a mental model of/for the programmer? But on the other hand, the tree reflects the topological architecture of the program, which has a start/entry point, etc. (as execution of any sequence of statements, no matter if in jumps or loops, is linear, I guess, except for concurrency/parallelism). The Alan Kay/Smalltalk things might point towards a more dynamic, interactive environment, though; could be that LISP is somewhat similar.
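A minimal toy sketch of that parse-once/evaluate-many idea, under heavy simplifying assumptions (only integers and two invented operators; this is not how any real LISP is implemented): the source is read into an in-memory tree of nested lists once, and every later evaluation walks that tree instead of re-parsing the text.

```python
# Toy s-expression reader + evaluator: parse source into a nested-list
# AST once, then evaluate the in-memory tree repeatedly without
# touching the source text again.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # discard ")"
        return node
    return int(token) if token.lstrip("-").isdigit() else token

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(node):
    if isinstance(node, list):
        op, *args = node
        return OPS[op](*map(evaluate, args))
    return node

ast = parse(tokenize("(+ 1 (* 2 3))"))  # parsed once, kept in memory
print(evaluate(ast))  # → 7
```

Here the AST literally is nested lists, which is the sense in which LISP code and list structure coincide; real implementations then compile or optimize that structure further.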

But OK, I recognize that all this is off-topic, and while it’s interesting/relevant (to me at least): if it were made easy for components to interact, developers might be able to write in multiple languages (including some of their own; see also Ward Cunningham), with less stress on homogeneous stacks, learning programming vs. learning a language. Right tool for the right job. Of course there are huge benefits to homogeneous, standardized language use too; node.js comes to mind, or indeed the increased porting/integration of LISP interpreters onto other stacks, like I understand Clojure is onto Java and ClojureScript onto JavaScript (minus the Java libraries?).

Really sorry, this will be a pain for the beat reporter; maybe cut the exchange out from here and put it elsewhere, or is it not needed any more at all? Or flag it for “omit” in beat reporting + OGM?

Oh, I see, the subterranean networks piece is an argument for fuzzy logic and analog data, which map very poorly onto graphs as an abstraction of clear, discrete boundaries and explicit, well-controlled states. It would absolutely clutter/pollute a graph if somebody were to reflect the readings of a high-precision sensor as separate nodes, connecting each to the reading of a time source; that’s clearly not a fitting, adequate use case for graphs as an instrument/tool.

Update: OK, from the article, you could put every tree as a node and the fungi as connections/edges (mostly benefiting from a tree being fixed in space by its surface appearance, and as such identifiable as separate from other trees). But trying to also reflect/model the fungi, roots, soil, temperature, maybe some transmission going on with animals/insects/spores-in-wind, etc.… In general, intuitively, the physical world and nature tend to have plenty of these non-obvious/hidden, hard- or impossible-to-measure, non-formal/non-deterministic (fuzzy/analog) things of high complexity, with fine systemic as well as dynamic tuning/cooperation/adaptation going on. If there’s a simplistic explanation/model, it’s almost certainly wrong or incomplete :slight_smile:

Update: what if we looked at trees primarily as an organism of roots, which just happens to have some surface periphery to catch itself resources like light and carbon? If one cuts off the surface periphery, it grows another one anew, but the cut periphery is dead material without its center/organism, which is the roots. It’s just that humans, as surface-level beings, only perceive trees as above-ground organisms; that’s where the focus is, how we perceive/interpret them. OK, admittedly, the roots get some water up to grow the surface-level parts, bridging a break in the medium.

Update: the article turns to nonsense as soon as the speculation starts about individualism vs. cooperatism or a “superorganism”, as well as Darwinism and “gene evolution” theory. I guess something can be said about different categories/types of organisms, and the modalities/ratios/interactions of individualism and/or cooperatism: all of these things are going on at the same time, in parallel, to different and/or even varying degrees.

Update: everyone knows and understands that parasitic plants have been known for quite some time, yes? Whether symbiotic or not, that’s certainly what many plants do; not consciously/actively, merely as a matter of adaptation to the circumstances they happen to encounter at hand.

Update: a cynic could say that it’s only humans who have this mechanism of abstract logic, which makes it hard for them to synergize :slight_smile:

Update: are plants naturally intelligent by the mere setup of their mechanism(s)? (I don’t want to say patterns; more like organic Alan Kay microbiology patterns, or a dynamic, adapting pattern language.)

Update: the whole ramble there about active species that move, stages of animals with different capabilities, trees as living organisms and then dead matter: sure, that’s a categorization/hierarchy to group what’s similar in terms of action/ability, but then, there could be other categorizations as well which don’t artificially put these groups into such a hierarchy/scale, as they’re each organisms in their own right, which only differ in strategy/implementation from other organisms in what/how they do their things. One day somebody might find evidence that rocks are “living” organisms in some sense and communicating/connected; just because we can’t measure/observe it doesn’t mean it’s not real/existing, be it via quantum entanglement or something, without getting esoteric. And then, sure, that’s not how plants or humans are “connected”; it’s just overloading the term with a different meaning.
Like people call their Web platforms an “operating system”, which it might be for the stuff it operates in service to the apps on top, but that’s not what was originally meant by the term, which was for operating actual computer hardware directly. Example from the article: “trading” or “trees sending messages”. In communication, that always refers to an active party submitting the message to a receiver, while trees don’t do this actively, awarely, consciously (as one would say a human is conscious); rather, in the plant world, the organism has some effects going on which lead to independent receivers being indirectly affected by it in some way (hence a “message” was received, and even if it is a two-directional “exchange”, it doesn’t follow a formal protocol as the term “exchange” would imply). Mixing up overloaded jargon leads people to infer conclusions/assumptions which are incorrect, not justified, or which may not translate well (and confuse/mislead instead).

Update: and then, right after that, indeed, an example of unsubstantiated “mother trees”. Also, it’s not that these trees go for an optimum, or are aware of one, or manage to find it; it’s just some effects going on for them individually, and if it so happens that they encounter new opportunities or circumstances, they adapt to what works for themselves locally, including/involving cooperation with others (but there might also be serious misconfiguration and confusion, lacking any ability to communicate clearly, explicitly, and coordinate accordingly). The question “On a more fundamental level, it remains unclear exactly why resources are exchanged among trees in the first place, especially when those trees are not closely related” is just dumb: resource exchange is what simply happens, as these organisms are not able to actively start or stop any of it; it’s more a result of their mechanism/design/pattern.
Humans have known ways to exploit that for a long time. I don’t know for sure, but imagine how natural rubber or maple syrup is harvested; it would likely work with roots and fungi as well in some way. And when extracting flows from a tree, it might in turn even start to produce more, because there’s a deficit/loss/leak to balance/compensate. I would bet that you could “connect” to a tree and “trade” and “communicate” with it, and what would it say? It would only “talk” and care/trade about tree-like/-relevant things. If one “listens” carefully, it might “tell” you where to find the water, or which other trees in the forest are unhealthy and soon about to collapse. You could tell it some lies if you wanted, of course, and the tree would believe all of them without question.

I stopped reading the article roughly in the middle, in part because the layout and screen reading really get in the way, and then I don’t care much about the confused nonsense about the evolution of genes and the speculation on how that might relate to altruism and selfishness; that kind of typical, popular, but false, confused nonsense. It’s not scientific, but wild speculation/interpretation in article publishing; abstract theories that don’t match well with reality. These plants are not aware, nor in control; if something works out well, it delivers the resources to increase the effect/flow even more, as positive/negative feedback loops. It’s a pattern/systems thing, not a social construct or something. Their information processing and decision-making are different from a computer’s or a human brain’s; assumptions which are true for one of these might not be true at all for each/any of the others.

Update: continued to read, just for completion/fairness/rigor (despite it being expensive/costly).
I guess nobody in Europe is doing clearcutting as part of regular foresting/logging, as it would just be devastating and not sustainable, also for the business within the individual nation states (on the other hand, I guess Europe is importing some wood or wood products from Russia, logging Siberia with North Korean forced cheap labor or something; who knows if any of that is true, and to what extent). Isn’t there the claim that the Romans cut down all of their forests in order to build their galley navy, which is why they’ve mostly been in a hot climate since then?

Update: an “organism” is a setup/structure/system which channels/organizes energy/flows into some effect like growth or movement, in contrast to dead physical matter. Higher-level organisms have a mechanism for feedback loops in order to control/direct the allocation/flows of energy, to optimize towards certain optimums like, for example, stability or speed.

Meta: a good example of why I personally need some reading + writing tools to help me with my sensemaking. Apologies; it was expensive to read the long, inconvenient article and write down these notes/comments, with no time to condense, and no tools for annotation that would also keep the referenced original intact (because of copyright and lack of tooling). The article currently has 239 comments, likely all uncurated; no need to add mine, as it would be lots of duplication as well as even more confusion and material to comment on.

Something along these lines was presented at EmacsConf 2020 (backed by an editor-independent implementation); details here: https://emacsconf.org/2020/talks/23/ (“Incremental Parsing with emacs-tree-sitter”); backing implementation: https://tree-sitter.github.io/tree-sitter/

1 Like

Like the browser keeps the DOM in memory for subsequent manipulation, except HTML is not a programming language in terms of being Turing-complete. I would assume that Emacs likewise doesn’t render once and throw away the LISP or Org-mode; or maybe it re-interprets scripts/code anew every time. Or maybe there’s an optimized intermediate language/representation/“bytecode”/virtualization.
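For illustration, Python exposes the same idea directly: source text is parsed and compiled once into a reusable code object (bytecode), then executed repeatedly without re-parsing. This is only a rough analogue to Emacs byte-compiling Elisp, offered as a sketch of the general compile-once pattern, not a claim about Emacs internals.

```python
# Parse/compile once, execute many times: the code object is the
# persistent intermediate representation, like cached bytecode.

source = "result = x * x"
code = compile(source, "<snippet>", "exec")  # parsed + compiled once

for x in (2, 3):
    env = {"x": x}
    exec(code, env)       # re-executed without re-parsing the source
    print(env["result"])  # → 4, then 9
```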

1 Like

Note: this message is not related to the original topic of Peeragogy dev (December 2020), nor am I particularly interested in or well-read on the topic of mycorrhizal mutualist dynamics, but I read a (translated) interview (in Hungarian) a while ago with Jill Butler and Ted Green while they were visiting an area with ancient woods in Transylvania. They touched on similar topics as in The Social Life of Forests, which, as I understand it, are in alignment with the controversial/non-scientific, yet popular among non-scientific circles, Das geheime Leben der Bäume. Recalling this interview prompted me to add some of my thoughts (to reiterate: these last messages of this thread should probably be under another forum topic or a side chat).

A thick willow branch, given enough moisture (no soil), can “survive” for several months, and grow roots afterwards, becoming a full-fledged tree (weeping willow).

A counter-example would be Romania, see: Clearcutting in the Carpathians or Romanian National Parks victims of deforestation; something I witnessed many times, even in the forests around my hometown, Sovata.

You’re possibly right, but also worth considering 2 papers published in Nature, where Suzanne Simard is a co-author: Defoliation of interior Douglas-fir elicits carbon transfer and stress signalling to ponderosa pine neighbors through ectomycorrhizal networks and Net transfer of carbon between ectomycorrhizal tree species in the field.

A related article to this topic: Facts or Fairy Tales? Peter Wohlleben and the Hidden Life of Trees

2 Likes

I guess @Klaus could provide architectural inputs here

Interesting conversation; where are you going with it? Here are some thoughts from an article already posted:

Why This Matters: Simard’s research has provoked “one of the oldest and most intense debates in biology: Is cooperation as central to evolution as competition?” Jabr writes that “some scientists have advocated, sometimes controversially, for a greater focus on cooperation over self-interest and on the emergent properties of living systems rather than their units.” Is there a greater lesson here for how competition and the tragedy of the commons have made the planet sick, and how we humans will need to heal it through cooperation?

There is consensus that trees do indeed have a symbiotic relationship with other plants, and even insects, birds, and mammals.

Researchers map symbiotic relationships between trees and microbes worldwide

SYMBIOTIC RELATIONSHIPS IN THE RAINFOREST

1 Like

here’s an invaluable syllabus from Howard Rheingold:

Toward a Literacy of Cooperation: Introduction to Cooperation Theory

http://socialmediaclassroom.com/host/cooperation7/lockedwiki/main-page

Is cooperation as central to evolution as competition?

In the Asian cultures the answer to that question is embedded in their way of understanding life itself:

The Essence of Yin Yang

The Dalai Lama on Quantum Physics and Spirituality

One is not possible without the other.