From the readings given on network theory and previous discussions in CCK11 around the idea of feedback, my impression is that Connectivism is heavily based around a specific type or types of network theory (Connectionist, Hebbian, feed-forward - what I'll call Connectivist Network Theory (CNT)) that appears useful for certain problems and has had a lot of success in modelling certain types of behaviours. I think Friday's debate around "networks" versus "systems" highlighted exactly the connection between this type of network theory and the limitations of viewing learning only from the perspective of a network's internal connections. In my opinion the network theory that underpins Connectivism does not address many of the fundamental implications of a complex entity. And, if a network is not a system or does not need any connection to a system (as argued by Stephen Downes), then why is the theory of CAS relevant to Connectivism?
What I see as the limitation of the currently used network theory is that it does not address the dynamic nature of the entities, agents, humans, etc. that are used to model the "nodes" in a network. And let's be clear that these are models - learning is not "a network" any more than it is "a system" - we use these models because they are useful to a greater or lesser degree. The complexity model for me is a more accurate model because it explicitly recognises that there is a dynamic relationship not only between the agents in a complex entity but between the complex entity itself and the individual agents. In other words, a complex entity is not just the sum of its parts but the product of its parts, its internal and external relationships and interactions. A complex entity learns from these interactions, adapting to circumstances, and so embodies its history.
George Siemens suggested that networks can occur within some kind of system, and the example he gave was of a learning network which existed within an educational system, which had been set up by humans with a purpose in mind. I suppose this is probably closer to how I see networks, in the sense that networks are the communication channels for some kind of larger scale system. Crucially, the larger scale system is both the emergent result of the connections in the network and also a constraining mechanism for the network (the notion of "enabling constraints" is a key feature of CAS and I have some thoughts about that for another post...). I think that if you extend Stephen's argument that networks don't need a purpose to the human domain, then you would have to argue that learning doesn't need any connection to human systems (personal, social, educational, etc.) - this argument de-humanizes learning and I don't accept it as valid. We humans are ourselves complex entities, embedded in larger complex entities (families, institutions, organisations, societies, etc.) and have nested within us further complex entities (biological, neurological, down to subatomic processes). Learning takes place in society and therefore learning is subject to the influences of social and economic systems and the complexity of human biological systems.
The Connectivist premise is that it is the connections themselves which are the learning - I wouldn't necessarily disagree with this. However, by making connections, as humans we change (intellectually, emotionally, etc. - and there is some recent evidence to suggest that learning causes physical changes in the brain), and as we change our connections change and our ability to make connections changes. This all takes place within some kind of social system, whether formal education or otherwise - by learning, as well as changing ourselves, we actually change that social system at some level, because the system is the emergent result of the product of its component parts. In some cases our own "learning" may influence others' learning and, through positive feedback loops, may cause more obvious changes in the larger system. I personally just don't think that you can model a human network where the human "nodes" are static and just turn on and off depending on the connections to other nodes, human or otherwise - it doesn't take account of the inter-relationship between the whole and the parts. Perhaps I am misunderstanding and that is not how Connectivism sees humans-as-nodes-in-networks, but my argument is that this is what CNT is suggesting.
The idea of back-propagation in CNT is modelled around a process whereby a signal is routed backward from the output in order to adjust the signal more closely to an expected or defined output. This model is based on an assumption that the node is linear and will always react more or less the same way. As stated, humans-as-nodes are inherently non-linear and may react the same way to entirely different signals, or differently to the same signal. In the educational domain, "defined outputs" has a certain similarity to "learning outcomes" - expected responses that we are somehow trying to attune students to achieving. To use this course as an example, in CCK11 each person is going to make connections depending on their initial set of connections (learning, understandings, interests). For myself, that means making connections between Connectivism and Complexity Theory - there are no learning outcomes and there is no defined output in CCK11 (are there learning outcomes defined for those who are taking this course for certification?). My understanding is the emergent product of all the interactions I have with my networks - not just CCK11; this all takes place within my human life, within social and environmental conditions of which I am an interdependent part - so "my networks" involve many nested, interconnected systems.
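To make the mechanism I'm describing concrete, here is a minimal sketch of that backward-adjustment process for a single linear node: the gap between actual and expected output is fed backward to nudge the connection weights. The function name and toy numbers are my own illustrative assumptions, not anything from the CNT literature - just the bare skeleton of the idea.

```python
def train(weights, inputs, target, lr=0.1, epochs=50):
    """Adjust a single linear node's weights so its output approaches a defined target."""
    for _ in range(epochs):
        # forward pass: the node's output is a weighted sum of its inputs
        output = sum(w * x for w, x in zip(weights, inputs))
        # the "signal routed backward": how far the output is from the expected output
        error = output - target
        # each weight is adjusted against its contribution to the error
        weights = [w - lr * error * x for w, x in zip(weights, inputs)]
    return weights

# starting from zero weights, the node is tuned toward the defined output 3.0
weights = train([0.0, 0.0], inputs=[1.0, 2.0], target=3.0)
output = sum(w * x for w, x in zip(weights, [1.0, 2.0]))
```

Note how the whole procedure presumes a fixed, externally defined target and a node that responds the same way every pass - which is exactly the assumption I'm questioning when the "nodes" are people.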
In summary, my opinion is that CNT is not easily compatible with the theory of complex adaptive systems. That's not to say I don't find Connectivism a potentially useful model - I would just like to open up some debate around the usefulness of CNT. I also can't quite grasp the intention of introducing complex adaptive systems only to then argue that networks are not systems.
Discrete State Turing Patterns by Jonathan McCabe (used under Creative Commons Licence)