Wednesday, 9 March 2011

Complex systems and network theory in Connectivism

Hugely frustratingly for me, I was unable to access the Friday night Elluminate discussion due to some computer problems - I managed to listen to part of the discussion through DS106, but only got to review the recording, with all the chat and diagrams, today. Over the last few weeks a common thread has been niggling at me, so in this post I hope to outline what I see as a limitation of viewing learning purely in terms of the connections in a network. The discussion on Friday helped me to identify a tension that I think needs to be resolved between network theory and the theory of complex adaptive systems (CAS).

From the readings given on network theory and previous discussions in CCK11 around the idea of feedback, my impression is that Connectivism is heavily based on a specific type or types of network theory (connectionist, Hebbian, feed-forward - what I'll call Connectivist Network Theory (CNT)) that is useful for certain problems and has had a lot of success in modelling certain types of behaviour. I think the debate around "networks" versus "systems" on Friday highlighted exactly the connection between this type of network theory and the limitations of viewing learning only from the perspective of the internal connections of that network. In my opinion the network theory that underpins Connectivism does not address many of the fundamental implications of a complex entity. And if a network is not a system, or does not need any connection to a system (as argued by Stephen Downes), then why is the theory of CAS relevant to Connectivism?

What I see as the limitation of the currently used network theory is that it does not address the dynamic nature of the entities - agents, humans, etc. - that are used to model the "nodes" in a network. And let's be clear that these are models - learning is not "a network" any more than it is "a system" - we use these models because they are useful to a greater or lesser degree. The complexity model is, for me, the more accurate one because it explicitly recognises that there is a dynamic relationship not only between the agents in a complex entity but between the complex entity itself and the individual agents. In other words, a complex entity is not just the sum of its parts but the product of its parts and their internal and external relationships and interactions. A complex entity learns from these interactions, adapting to circumstances, and so embodies its history.

George Siemens suggested that networks can occur within some kind of system, and the example he gave was of a learning network existing within an educational system that had been set up by humans with a purpose in mind. I suppose this is close to how I see networks, in the sense that networks are the communication channels for some kind of larger-scale system. Crucially, the larger-scale system is both the emergent result of the connections in the network and also a constraining mechanism for the network (the notion of "enabling constraints" is a key feature of CAS and I have some thoughts about that for another post...). I think that if you extend Stephen's argument that networks don't need a purpose to the human domain, then you would have to argue that learning doesn't need any connection to human systems (personal, social, educational, etc.) - this argument de-humanizes learning and I don't accept it as valid. We humans are ourselves complex entities, embedded in larger complex entities (families, institutions, organisations, societies, etc.) and have nested within us further complex entities (biological and neurological, down to subatomic processes). Learning takes place in society, and therefore learning is subject to the influences of social and economic systems and to the complexity of human biological systems.

The Connectivist premise is that it is the connections themselves which are the learning. I wouldn't necessarily disagree with this; however, by making connections we change as humans (intellectually, emotionally, etc. - and there is some recent evidence to suggest that learning causes physical changes in the brain), and as we change, our connections change and our ability to make connections changes. This all takes place within some kind of social system, whether formal education or otherwise - by learning, as well as changing ourselves, we actually change that social system at some level, since the system is the emergent result of the product of its component parts. In some cases our own "learning" may influence others' learning and, through positive feedback loops, may cause more obvious changes in the larger system. I personally just don't think you can model a human network where the human "nodes" are static and just turn on and off depending on the connections to other nodes, human or otherwise - it doesn't take account of the inter-relationship between the whole and the parts. Perhaps I am misunderstanding and that is not how Connectivism sees humans-as-nodes-in-networks, but my argument is that this is what CNT suggests.

The idea of back-propagation in CNT is modelled around a process whereby a signal is routed backward from the output in order to adjust the network more closely to an expected or defined output. This model is based on the assumption that the node is linear and will always react in more or less the same way. As argued above, humans-as-nodes are inherently non-linear and may react the same way to entirely different signals, or differently to the same signal. In the educational domain, "defined outputs" have a certain similarity to "learning outcomes" - expected responses that we are somehow trying to attune students to achieving. To use this course as an example, in CCK11 each person is going to make connections depending on their initial set of connections (learning, understandings, interests). For myself, that means making connections between Connectivism and Complexity Theory - there are no learning outcomes and there is no defined output in CCK11 (are there learning outcomes defined for those who are taking this course for certification?). My understanding is the emergent product of all the interactions I have with my networks - not just CCK11; this all takes place within my human life, within social and environmental conditions of which I am an interdependent part - so "my networks" involve many nested, interconnected systems.
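To make the contrast concrete, here is a minimal sketch of the kind of error-correction that back-propagation performs, reduced to a single linear "node" trained toward a defined output (a hypothetical illustration for this post, not any particular connectionist implementation). Notice how the model only works because the node responds in exactly the same linear way every time - which is precisely what human "nodes" do not do:

```python
# Minimal sketch: training a single linear node toward a defined output.
# The error between actual and expected output is the "signal routed
# backward", and each weight is nudged in proportion to it (gradient
# descent). The node's linearity is what guarantees convergence here.

def train(weights, inputs, target, lr=0.1, steps=100):
    for _ in range(steps):
        output = sum(w * x for w, x in zip(weights, inputs))
        error = output - target            # signal routed backward
        weights = [w - lr * error * x      # weight update toward target
                   for w, x in zip(weights, inputs)]
    return weights

# Starting from zero weights, the node converges on the defined output.
w = train([0.0, 0.0], inputs=[1.0, 2.0], target=3.0)
final_output = sum(wi * xi for wi, xi in zip(w, [1.0, 2.0]))
```

The point of the sketch is that the whole procedure presupposes a fixed, defined target and a node whose response function never changes while it is being trained - assumptions that break down for humans-as-nodes.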

In summary, my opinion is that CNT is not easily compatible with the theory of complex adaptive systems. That's not to say I don't find Connectivism a potentially useful model - I would just like to open up some debate around the usefulness of CNT. I also can't quite grasp the intention of introducing complex adaptive systems only to then argue that networks are not systems.

Discrete State Turing Patterns by Jonathan McCabe (used under Creative Commons Licence)


  1. Hi Graeme,

    I really like this post and your careful explanations of putting these ideas together. I have struggled since the beginning of the course with what you phrased helpfully here, the "dehumanizing" of learning. My approach also was to think about the role of the "messier" parts of being human, too: emotions, emotional connections...

    For me, "systems" somehow implies that dynamism more than "network" does. It's likely just a word-association thing, but systems, to me, move. They're potentially flowy or fractal-y. Last night I was reading a book where the author talked about her chickens "roaming around, rolling in the dirt, fighting in pairs, and periodically forming a big chicken molecule."

    "System" also implies to me the spaces between the nodes that aren't "connections." For instance, I'm interested in field theory, so when I see some of the diagrams Stephen or others create about connectivism, I immediately start wondering about the seemingly "empty" space between. Haven't physicists taught us there's no such thing as nothingness?

    To answer your question, too--I'm taking this course for credit, I've enrolled in the certificate program (having a few moments of questioning this now), and there is nothing at all different about the course itself except that enrolled students must do the 3 assigned papers and the final concept map. And we're graded on them.

    Thank you--I will be thinking about your post for awhile.


  2. very helpful in connecting it all (weaving my own personal net-work)... bear in mind that some networks function less effectively than others... some may even be dysfunctional


  3. Leah / Vanessa - thanks for your comments.

    The dichotomy of networks / systems does remind me of the wave / particle duality in physics. How you set up the experiment will determine the result that you get. I think both models have value, and what we really need is a better integration of the two (if that is possible). I've been re-reading Korzybski, and one thing he talks about is that, to be useful, a model has to have structural similarities to whatever it is modelling. I think that both network models and system models could explain the multiple, interconnected and interdependent connections in learning, but personally I see the interdependency more explicitly in the system model - Leah, perhaps this is what you are suggesting too?

    Vanessa - I am still trying to get to grips with the idea of optimising networks. I can understand the concept without problem for an abstract, single instance of a network but I find it more difficult when there are multiple interdependent and inter-related networks. Changing the connections in any network must surely have implications for the optimisation of other networks?

    Best wishes