Half an Hour

Monday, 18 February 2008

The Reality of Virtual Learning

Posted on 12:31 by Unknown
Presented to the Defense Learning Academy, Cornwall, Ontario, January 30, 2008. The slides and audio of this presentation are available here.


There’s always a danger when you come in and you do a talk like this: the idea that you’re presenting something, and it’s the facts, the truth, this is the way it is, I’m going to lay it all out on the line, I’m the expert, the guru, you’re the mob, and so on.

That’s not how I view this material at all. So I want to preface this. I’m looking at a particular slice of online learning, not the whole field of online learning, obviously.

There are many things, including a wonderful helicopter simulation in Gagetown that I got to fly once and crashed very quickly –

Voice: seven seconds.

Is it still a record?

Voice: yes. [Laughter]

All the way across to the Second Life stuff that you saw, to the Company Command software that you heard about yesterday, from Nate Allen, and the rest.

This is just one small part of it and it’s one person’s perspective on it. It’s not all a seamless whole. It’s not a beautiful step-by-step narrative that wraps up nicely and neatly in a bow like a Sherlock Holmes movie or short story.

It’s something that’s a little bit loose and scrambly and something that you should take, look at critically, analyze it in your own perspectives, take what you need and discard the rest and take it from there.

So I want to begin this talk by talking about reality. I know that seems like an odd sort of place to begin a talk. But perhaps it’s appropriate, because we’ve just finished a demonstration of Second Life.

A lot of the time, what we do here in our own discipline, our own domain, is talked about as virtual reality. But my background is in philosophy, and as a philosopher I’ve learned over time to look at reality from different perspectives and in different lights.

We often hear the phrase ‘the reality is’ as a rhetorical device, and I’m sure you’ve all heard it, right? You have this great idea, “I think we should do such and such and so on,” but someone comes along and says, “Yeah, but the reality is...” Sound familiar?

‘The reality is’ is the enemy of innovation.

When we look at reality, when we analyze what reality is, almost everything that we think is real (properly so-called) is a construction. It’s an artifact. It’s something that we’ve created. It’s a way that we understand the world. It’s a device that we use in order to understand the world.

Kant talked about ‘space’ and ‘time’ as the necessary prior conditions of perception. The idea here is that we don’t know that space and time exist absolutely, certainly, for all time; but as perceiving, thinking humans, the only way for us to make sense of our perceptions at all is to come up with the concept of ‘space’, come up with the concept of ‘time’, to create a framework in which to place our many different perceptions and come up with some understanding of them, an explanation of them, a way of dealing with them. Hume called them ‘useful fictions’.

There are many ways of looking at ‘real’ in today’s world.

We think of ‘real’ on one hand as contrasted with the ‘artificial’. So the real is like the natural; the artificial is like the fake.

Now we can think of ‘real’ as ‘genuine’, as the actual thing, as opposed to the fake thing that is not the original, that is just a pale imitation. We have “it’s the real thing,” and by implication, Pepsi is the fake thing.

(Pepsi should have done that. They should have come out with the advertising. I bet you it would have worked. Pepsi: fake Coke. [Laughter] That would have worked I tell you.)

These days of course we also have the contrast between the ‘real’ and the ‘virtual’. Analyze what we mean by that. Because the ‘virtual’ is every bit as real as the ‘real’ except that it’s not physical. But I mean it’s not like the things that are going on in Second Life are artificial or fake. Well, now, maybe they are.

How about ‘real’ versus ‘illusory’? We have the idea of real, something that exists. A mirage is not real. Not because we don’t perceive it. We do perceive it, but it’s not real because it does not in fact exist.

Now does this sense of real apply to something like Second Life? When we look at Second Life would you say, “Hm, that doesn’t exist?” Not really. That’s not the sort of thought that goes through our mind, is it?

We have ‘real’ versus ‘delusion’. The idea here is that having a grasp of reality is having a grasp of sanity, being able to make sense of your perceptions without all the static or interruptions that somebody who is deluded might experience. Is there a great conspiracy? Do we need to wear tin foil hats? Well that’s not reality, is it? And hence we have the admonition to people, “Get real,” as “Become sane again.”

Now there are many ways to find reality, many ways of sensing, perceiving, touching, measuring, many different points of view. We have different perspectives, different models. The models tell us what things exist and what things don’t exist.

It’s an interesting thing. Here we have – well, look at this right in front of you. We have here a podium, right? We say this podium exists. This light exists. This light is part of this podium. It’s almost artificial the way we give this particular podium an identity.

It depends on a certain perspective, a certain world view. And so, as I talk about this, think about what the “realities” are in the real world. Think about all of the challenges that National Defense is facing these days: there are many different realities, shifting realities, changing realities that you’re dealing with.

Think about this. The ‘realities’, the constructs that we will accept as real, that characterize your institution, that characterize your students or your fellow staff, that characterize your values. What of your values is real? What of your values is artificial? What is the construct? What is fake? What is deluded? Your finances. My finances are very, very heartbreakingly real.

So, against that background I want to draw out the traditional conception of knowledge. The traditional conception of knowledge is exactly that conception that I tried to warn you against as I took to this podium today.

The traditional conception of knowledge is where things like knowledge, facts, values, and institutions are ‘real’ in all the full senses of the word and unchanging. They’re presented. Here they are. You will consume.

Even ‘change’, which is kind of ironic, is viewed as inevitable, is viewed as real, is viewed as something that is out of our control. People talk about change management. Change management is getting people to accept change. That is, something that they cannot control and have to deal with. That’s one way of perceiving change management.

The reality is, especially in today’s world - and I’ll have different ways of talking about that as I go through this talk - the reality is we define what counts as real.

We define it by our theories, our world views, our underlying values, our moralities, our religions, our perceptions, our beliefs, our intuitions, our different systems of logic and mathematics. All of these things combine to create a reality, and this reality is different, sometimes very different, for each person.

Look at all of you right now. The reality is based partially on perception. You’re all looking at me; true (it’s enough to give me a complex). But if reality is what you perceive, each of you is perceiving me from a slightly different angle which means that we’ve got about 100 different versions of Stephen Downes sitting in this room, which is really more disturbing. [Laughter]

Here’s another exercise, and this is a particularly relevant exercise for this organization. What is a ‘student’? What is a natural student? What is a genuine student? What is a physical versus a non-physical student? An existing or non-existing student? An actual or a deluded student?

We have this way of dividing the world. We have teacher and students and never the twain shall meet, except in a classroom. But these categories, even the idea of real and unreal, existing or non-existing students, are beginning to change. We use the term fuzzified. Fuzzified, yes, is a word.

Here’s another challenge for you about real. What is learning? Particularly with the emphasis on testing and repetition and all of that, we may get the impression that learning is something that we have a very good grasp of, that we know what it is. It’s ‘remembering’, say, or something like that.

But what is learning? What is natural learning? What is genuine learning? What is physical learning? Learning from experience as opposed to learning from virtual reality. Are those different kinds of learning, or is it the same kind of learning created in a different way?

What is existing learning? How can you have existing learning if learning isn’t a thing? You have a thing that isn’t really a thing; you can’t say that it exists. You see what I mean?

This is the problem with studying philosophy. You look at these words. You come into a domain like education and people say, “well, there is learning and there are facts and there are textbooks.” You look at that and say, “yeah, but the world isn’t like that.”

The reality is, learning has changed. Learning has changed from being about reality, from being about facts and objects and things and dates and names and ways of building things and ways of taking things apart, to being about ways of creating understanding, coping with, managing reality, to verifying reality, to being able to determine what is true in our perspective, what is not true in our perspective, to creating reality, to making reality. The skill that it takes to build something in Second Life is a type of learning, but it’s a learning how to produce, how to create.

It’s learning in an age that has changed. We used to live in a world that was very certain, where a cause had an effect. You knew what created a given effect, if you wanted to cause it. These days, with multiple, complex, interrelated variables, changing even one thing means we sometimes don’t know what’s going to happen.

It’s learning in an age of obscurity where the reality is simply not known. The reality is beyond our perception, like Kant’s space and time.

It’s learning in an age of chaos with multiple, independent variables, where from the same cause you can get one of a range of effects. It’s like predicting the weather, which is particularly rough these days.

It’s learning in an age of change, where what was true today ceases to be true tomorrow.

There’s change, too, in the way we have to go about learning. The old model - well, this diagram (the SCORM diagram) is just a few years old; the old model is less than a decade old. The old transmission model.

The idea of learning as content, as facts: you take the facts and you assemble them in some way, and then you run them through the gears of a learning management system, and then - I love this - the delivery device shoots them into people’s brains, you know, as though if you shoot something hard enough it’ll lodge in their brain and become learning. That’s the old transmission model.

The problem with that model isn’t that it didn’t work. The problem is that it did work. The problem is that that kind of learning shoots facts into people’s brains in a world where there aren’t fixed facts any more.

Learning is not remembering. Learning isn’t simply acquiring facts, because that will not give you the capacity to deal with the world. It will not allow you to create ‘space’. It will not allow you to create ‘time’.

Learning - you can remember things without meaning. This is from Lewis Carroll – “’Twas brillig, and the slithy toves…” I remember those words. Haven’t a clue what they mean. So, would I be said to have learned those words if I have no idea what they mean?

Or, another example – mathematics. Everybody took mathematics in school and most people passed it. Yet when they go into the workplace – I used to work in concession stands and 7-11 stores and things like that – every time a new employee came to the store we’d teach them a skill. We’d teach them ‘counting change’.

The reason why we teach them to count change is that even though they’ve passed grade 12 mathematics, they do not understand that if somebody gives you a 20 for a 79-cent item, you don’t hand them back a 50 in change. They don’t associate the mathematics with the value.

The syntax of the mathematics doesn’t connect with the semantics of being a 7-11 clerk. So ‘making change’ is a process that ensures that the change corresponds to what was actually given to you. If somebody gives you a 20, you count the money back and add it up till you get to 20 again.

So they can learn math, oddly and interestingly, without understanding what math is supposed to do. We actually see that all the time.

Learning is not ‘content’. I know, it used to be, “content is king, content is the web.” Learning is not content. It is not shooting those facts into your head.

Rather, learning, as I characterize it in this slide, is a process of ‘becoming’ rather than a process of ‘acquiring’. Now when I say something like that I sound like one of those German philosophers who just start making up words.

That’s not really what I want to do. Rather, what I want to say is: learning is a process of creating a particular mental configuration, a particular set of connections between the neurons in your mind.

Learning is shaping yourself rather than acquiring something. To learn, as the slide says here, is to instantiate patterns of connectivity. So what you’re doing is like exercising. You don’t make someone strong by putting muscles into their arm (that’d be kind of neat, like Schwarzenegger, “I’m Ah-nold…”).

But you don’t put muscles into your arm. You have to grow muscles, you have to develop them and you do that through certain processes. I imagine you guys understand those processes much better than I do.

What learning really is, on this model, therefore, is ‘not propositional’. It’s not a bunch of sentences. It’s not a bunch of facts. Very often it is tacit, to use the word of Michael Polanyi in Personal Knowledge: learning is like riding a bicycle.

You could not describe what it is to know how to ride a bicycle. You could sit there and write it out, sentence after sentence after sentence, and a person could read all those sentences and still not know what it’s like to ride a bicycle.

Knowing how to ride a bicycle is, again following Polanyi, in an important sense ineffable - it is not expressible in words, and as an aside, this means that the efforts to (as they say) “capture” tacit knowledge are misguided. What they are attempting to do is take something that is ineffable - not expressible in words - and express it in words. Obviously when you do that you change something important. You’ve taken something that is knowledge and changed it into something that is not knowledge.

Learning and knowledge are also, very importantly, personal. What you learn, what you know, depends on context. That means in a very important sense you can’t generalize it. You can’t say because I know this fact about the world, this fact about the world should be equally known by every other person in the world. Because what I know about the world very much depends on my own perception, my own background, my own culture, my own experiences.

What knowing is - on this picture - changes as well. To know something used to be: to know the rules, to know the categories, to put things in their place, to understand the laws of nature. On the new picture, knowing is much more about patterns. It’s much more about similarities. It’s not knowing that a tiger is a kind of cat; rather, it’s being able to recognize a tiger when you see it.

We can see how this works from the perspective of a network. We see both the aspects of recognition here and we see the aspects of ineffability, or tacitness, here. This is a network. It’s a very stylized network. A network of this type is properly known as a neural network (this is a term from connectionism more than a term from neurophysiology).

The idea here is that our perceptions, such as that perception of a tree, correspond to a pattern of connectivity in that network. It’s kind of hard to see here, but the darker red lines are intended to represent that.

So you see the tree and as you see trees over and over and over and over and over through your life, that creates a pattern of connections between various neurons in your mind. Obviously this is a gross simplification. That’s about 20 neurons and the mind has about ten billion or whatever neurons. So the actual patterns of connectivity are very different.

Now what’s important here is that no single neuron corresponds to the perception of a tree and there is no propositional representation of that perception of a tree. It is what we would call ‘sub-symbolic’.

Another aspect of a network that operates in this way is that the same network manages multiple perceptions. Here we have a network that has three separate perceptions. It can recognize three different types of objects. It can recognize a tree, which again is the strong connections in red. It can recognize this cute little puppy dog with the – oh, I can’t really see the colors – the different colored connections. Then it can recognize a couch. Again, different colored connections.

What’s really important here is to understand that the same network carries the representations of all three of those subjects. Those representations aren’t – they’re not like pictures. They’re not like words. They’re like patterns of connectivity.

What’s interesting about this is: change my perception of a couch and even though it is not in any way logically connected to a tree, it changes my perception of a tree because they (the perceptions) are operating in the same (neural) environment.

The mechanisms by which networks learn, the mechanisms by which networks form these connections, tell us about the mechanisms through which learning happens in humans and, for that matter, through which learning happens in society or networks of people in the world.

I’ve identified three major types of mechanisms by which these connections are formed in this diagram here.

One is Hebbian associationism, which is based, as this slide says, on concurrency. If two neurons are active at the same time, and repeatedly active at the same time, a connection grows between them.

Second, back propagation: two neurons form a connection between them and send a signal forward. Based on that forward signal, a signal is sent back that may correct the original creation of the connection.

Then finally, third, the Boltzmann mechanism, which is a ‘settling out’ mechanism. We have a set of neurons and a set of connections connecting those neurons, and the Boltzmann mechanism will try to find the most stable configuration, a configuration of lowest energy. Think about it by analogy: throw a stone into a puddle of water. The puddle of water will ripple for a while, but eventually it will settle out into the most stable configuration, which is, for water, flat. The mind works the same way. The connections go back and forth to create a perception in a person’s mind. You can visualize the connections going back and forth, back and forth, until eventually the mind just sort of settles into the most stable configuration.
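
Just to make that first mechanism concrete - and this is an illustration only, with an arbitrary network size, learning rate and decay, not anything from the slides - the Hebbian rule can be sketched in a few lines of Javascript:

    // Hebbian sketch: strengthen the connection between two units when
    // they are active at the same time; let it decay slightly otherwise.
    // The size, rate and decay here are arbitrary; illustration only.
    var units = 5;
    var rate = 0.1;
    var weights = [];
    for (var i = 0; i < units; i++) {
      weights[i] = [];
      for (var j = 0; j < units; j++) weights[i][j] = 0;
    }

    function hebbianStep(activations) { // activations: array of 0s and 1s
      for (var i = 0; i < units; i++) {
        for (var j = 0; j < units; j++) {
          if (i === j) continue;
          if (activations[i] && activations[j]) {
            weights[i][j] += rate;          // concurrent activity: strengthen
          } else {
            weights[i][j] *= 1 - rate / 10; // otherwise: slow decay
          }
        }
      }
    }

    // Repeated presentation of the same pattern entrenches its connections,
    // the way repeated perception of trees entrenches 'tree' in the mind.
    for (var t = 0; t < 100; t++) hebbianStep([1, 0, 1, 1, 0]);
    console.log(weights[0][2]); // strong: units 0 and 2 fired together
    console.log(weights[0][1]); // zero: units 0 and 1 never fired together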

Obviously as I’ve talked about those principles of association I’ve glossed over quite a bit of detail there. But that gives us a picture or a model of what learning looks like in this network environment. I’ve represented it in this diagram. This diagram merges or combines two major functions.

On the one hand the function of teaching, which could be said to be ‘to model’ or ‘to demonstrate’, to present perceptions for other people to perceive, things for them to emulate, things for them to follow.

Then on the other hand, the process of learning itself, which is ‘to practice’ and ‘to reflect’. To practice is a lot like exercising. Then to reflect, which is similar to the Boltzmann mechanism: to try to comprehend, to understand, putting the experience into context, bringing it into line with the previous beliefs and experiences that you’ve had.

These four things combined together, modeling, demonstrating, practice and reflection, form the four cornerstones of the learning experience.

And then running through that learning experience is the personalization, the context dependency, of the learning experience: beginning up at the top with the exercise of choice by the learner, which manifests itself in the definition of an identity for the learner, which in the end expresses itself as creativity.

Now, very important: this is a model. This is not the way the world is. This is a framework that I’m using in order to try to grasp or apprehend this process. I’m not coming here and saying, “oh you know, the whole world is defined by these seven elements.” That’s not what I mean at all. I’m trying to give you a perspective or a point of view on the process.

So we have, in summary, E-Learning 2.0: the idea that learning is not based in objects and contents, in books and course objects and classes and lessons stored as though they were in a library, but rather the idea that learning is like a utility, learning is part of our experience, learning is something that flows, something that’s dynamic, that changes. It is this network, both internally and externally, reacting and adapting to a wide variety of perceptions.

So that leads us to the underlying concept of E-Learning 2.0. The first aspect of that concept is that learning is learner centered. That is to say it is centered around the interest of the learner.

I would also say - it’s not on here - but I would also say it’s centered around the perspectives of the learner, the situation or the context that the learner finds themselves in, the job that they’re doing, and all of that.

Learning is also, importantly, owned by the learner. Rather than simply being the reception of content, it is something that is managed, created and deployed - bad word, I don’t like that word - by the learner. I don’t want to say ‘used’, because that would imply that it exists, and I don’t want to say it like that either.

But you get the idea what I mean. It’s something that the learner does. The learner is not a passive receiver. The learner is an active creator of their own learning.

The second major aspect: we’ve seen this already in presentations today. It’s immersive learning. It’s learning by doing.

The third major aspect is, it’s connective learning. It’s connective learning in the sense that we’re creating connections in the mind, but it’s also connective learning in that the learning occurs through the process of creating connections with other people in the world: teachers, other learners, colleagues, whatever. And working with those connections, receiving information, sending information.

Learning is, as this slide says, based on conversation and interaction. Some examples - and again we’ve seen this already with Second Life and the simulations that you all use - game based learning.

Clark Aldrich has identified four major types of simulation learning: the branching, spreadsheet and quiz-game simulation kinds of learning, among others.

Just as an aside, for those of you who have worked with SCORM and Learning Design: SCORM and Learning Design really amount to only the branching type of learning and not the other types. I could not imagine a SCORM application that was a spreadsheet type of game, like SimCity or something like that.

Another example of this learning is what Jay Cross calls workflow learning or informal learning. This is learning where the learning and process of doing your job or doing your work happen at the same time and in the same place using the same device.

So, electronic performance support systems, for example, which are applications that actually provide learning inside the application that you’re working on. I’m sure you’re familiar with those.

Where we see this most of all interestingly is in games where, if a person is playing a game, the learning for the game has to be inside the game because the game player will not read the manual; absolutely won’t read the manual. There’s no point in giving them a manual. So, whatever learning the player is going to need during the playing of the game has to be inside the game.

Also related to this is the model of the community of practice described by Etienne Wenger. Again, the idea of the community of practice is that this community exists right inside the workplace.

You might wonder about that. We did some work with municipal officials in northern Alberta and we asked them what their primary means of learning was on their job. Their primary means, if they needed to learn something, what they did was they picked up the telephone and they called somebody. They called a town manager in another town.

Basically, their learning model was to have access to their community of practice right on their desktop. They picked up the phone and called somebody. That was learning for them. That model can be applied to the online world as well. It can be applied to the much more complex learning and working environments we find ourselves in today.

Then there’s environment and visualization technology in games: the helicopter that I so famously crashed is an example of an environment able to provide the visualization I need to understand what it feels like to fly a helicopter.

Another aspect is mobile learning. One of the things that I’ve said from the very beginning is that E-Learning and online learning are not about being tied to a computer. They’re not about sitting at a desk looking at a computer screen.

Concordant with the idea of immersive learning is the idea of learning in the place where learning is most appropriate. A slogan I’ve used for many years is “the best place to learn about a forest is in a forest; the best place to learn about law is in a courtroom.” What these technologies now provide us is access to learning materials in the physical situation where learning is most needed.

One aspect of that, and this is just an aside: learning - I want to emphasize - isn’t simply the consumption of information. Learning is also the production of information. So mobile learning isn’t just about getting content like flash cards or drills or instant messages; it’s also about resource capture.

Here I am on the road - I have a microphone there. I’m recording this talk; somebody else is recording this talk (hi, all of you out there in video land - I always like to put in a personal message to the people who are…). That’s capturing the learning as well.

One of the major things that we see in E-Learning, and especially E-Learning 2.0, is the idea that the process of learning is intertwined with the process of capturing and creating new learning content.

On the one hand, in online learning we all look at our own environments and develop tools and systems intended to support traditional classroom learning: the learning management system, learning objects, SCORM, Learning Design, that whole infrastructure. I’m sure you’re all familiar with it.

On the other hand, what we should be doing, what we need to be doing and what we could be doing is developing tools and systems to support immersive learning, personal learning, dynamic learning - as I say they’re living systems.

The first iteration of this is user-produced media. Blogs and blogging are a very simple example, as are podcasting and vodcasting.

Again, Nate talked about Company Command yesterday. One of the major things that makes a site like Company Command work is that the members of the site are actually creating the learning materials. They’re drawing on their own perspectives, their own experiences, and contributing them to the site.

There are many ways of producing content, and particularly with the new technologies we are more and more able to produce very good multimedia content. Not just text content, like blogs and blogging, although that’s very important.

I guess I should mention Twitter, because people were talking about Twitter, which is very short textual content. But there’s also podcasting, as I’m doing with my audio recording here; vodcasting, which she’s doing with her video recording there; and game mods, game modifications, and other multimedia, all now very easily in the hands of people. People are able, very simply, to create complex multimedia.

I’m playing right now - you can see it if you look at my web site - with something called Kaltura. What Kaltura is is a system that allows me to create a video, either recorded off my own video camera - I have a little video camera in my computer - or from clips I upload or grab from YouTube or whatever. So I create my video and I can add voiceovers and whatnot to it. I put it on my web site, and then the next person who comes along can edit my video.

And so there’s a web site out there called WikiEducator, which is using these Kaltura videos. What they’re doing, for different subjects, is building user-generated, user-created educational videos under different topics, where people who come along after a video was created add their own segments to it. Amazing. Who would have thought of that ten years ago? Oh, and of course, it’s all free. Of course.

So we have Web 2.0, the learning network: the idea of this place, this weird place, this virtual place, that is an intersection between education and work and home, that allows us access to easy-to-use tools supported by hosting services, like Kaltura or Flickr or YouTube, and allows us to create new types of learning.

For example, the E-Portfolio is a blog-like kind of learning. The idea here is that your learning is creating and presenting materials online.

I used to use - still use - the slogan “aggregate, remix, repurpose, feed-forward” to characterize the learning process. To aggregate is to bring in content, information from multiple sources; in other words, to reach out to all your connections online.

To remix is to bring different things from different sources together. To repurpose is to shape it to your own needs, to your own learning context, and then very importantly to feed-forward, to distribute this new material either as a video or a multimedia or a blog or whatever to other people in your network.

This learning is unorganized, it’s unmanaged. It doesn’t have an ‘outcome’ or a ‘purpose’. It is based on the flow, the communication, the content of the moment. It doesn’t have presenters and receivers.

It’s characterized by the “unconference”. I’m not sure if you’ve seen a whole lot about the unconference movement that has sprung up recently. The unconference movement is a bunch of people – like you, say – get together and there is no pre-defined agenda as to what will be talked about. Different people volunteer to talk about different things. They write what they’re going to talk about on a notice board at the side of the room, and people decide to create their agenda at the time they’re having their conference.

Sometimes people talk, sometimes people don’t talk. The content and the - well, the entire structure of the conference is decided by the participants: importantly, not by votes or anything like that, but by each participant doing what they feel is most helpful to them at that point in time.

Now what typically happens is they move into different clusters, one cluster talking about one thing, another cluster talking about another thing. The idea here is that they are producing the best possible learning for themselves that they could have produced at that point in time with those people.

It’s messy. It’s messy, as opposed to neatly pre-defined structure. Look at this conference at this time (nothing personal to the organizers). Here you are. You’re all sitting here listening to me. Is this the absolute best use of your time that you could possibly have at this point in time? Be honest with yourself because you’re not going to say that out loud are you?

Voice: No! [Laughter]

The situation here has been pre-defined. There was a conference organizer who, for reasons unknown, picked me, brought me here through freezing rain [Laughter], and now here I am talking to you, right? And they had the best intent, and of course they made the best decision that they could given the resources that they had at the time, and it may have been pretty good. But is it the best? Could you collectively have done a better job with the resources that you have?

The proposition by the unconference movement is “yes,” you working together to organize yourselves could come up with a better structure, better use of your time right now than a single or small group of people organizing this conference for you. That’s the proposition.

Again, it’s the idea of user-generated content, user-generated conferences, user-generated learning as a whole. Again, it’s based on flow. I just mentioned Twitter in passing. Twitter is the very definition of flow. Twitter is 140-character messages. You’re not going to have anything static and permanent with Twitter. You’re going to get messages to the effect of, “what are you doing right now?”

The idea here is that learning is to immerse yourself in this flow. Not to try to capture this flow. Not to try to control or hold this flow, but rather to learn how to adapt oneself to this flow.

Douglas Rushkoff even talked about the internet as being like surfing, and learning on the internet as being like learning how to surf. A person who surfs doesn’t try to come to one single, concrete understanding of the wave, because there is no such thing. A person who learns how to surf learns how to adapt him or herself to the wave, drawing on all their experience, all their perceptions, to make minute adjustments, to be able to react to the wave as it changes, as it forms.

So we have Web 2.0. That’s the concept and I just want to look briefly at some of the core technologies that underlie this. So, I ask how much time do we have. Oh, there we go.

Voice: Fifteen minutes.

Fifteen; good. See, I have a watch here, but it’s ten minutes fast, so I don’t want to trust myself. Time is - well, there’s McTaggart; he wrote a paper called ‘The Unreality of Time’. I believe him.

The first underlying tools of Web 2.0 technology are social networking tools, tools to create these learning networks. Social networking tools are very simply defined as tools that allow you to create lists of your friends, connect to them, and then use those lists to interact with them.

The earliest social networking tools were things like instant messaging: ICQ, where you have a list of buddies; Skype, where you have a list of contacts. Now we have sites like Friendster and Orkut and Facebook and all the rest, where you very explicitly draw out a list of friends.
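
At bottom, the data structure behind all of these tools is nothing more than a list of links from each person to other people, which the software then uses to route interaction. A minimal sketch, with made-up names:

    // A social networking tool reduces to each person keeping a list of
    // links to other people, then using that list to interact with them.
    // The names here are made up for illustration.
    var network = {
      alice: ['bob', 'carol'],
      bob:   ['alice'],
      carol: ['alice', 'dave'],
      dave:  ['carol']
    };

    // 'Interacting' with your list: send a message to every contact.
    function broadcast(person, message) {
      network[person].forEach(function (friend) {
        console.log(person + ' -> ' + friend + ': ' + message);
      });
    }

    broadcast('alice', 'What are you doing right now?');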

These social networking technologies, I’ll just point out, are very much in flux right now. The future of learning is not in Facebook, no matter what you’ve read. The future of learning is not going to be any social networking site, properly so-called, but rather something called – something that is much more distributed – Tim Berners-Lee called it and I quote, “the giant global graph” – GGG.

This is the idea that each person’s contribution to this web-wide social network is a stand-alone thing, not tied to any particular web site like Friendster or Orkut, connecting you to other people anywhere they are on the web, rather than only to other people who are in Orkut or in Friendster or the like.

That’s what is evolving: there’s a whole movement on social data portability, as well as OpenID, which makes this possible (I’ll mention that a little bit later).

Another underlying technology - and again this speaks to the messiness of the uncontrolled nature of Web 2.0 - is tagging. Tagging is a very simple phenomenon. A person presented with a resource like a photograph or a video or a web page or even an object, takes a word off the top of their head that they believe describes that object rather than selecting a word or a classification from a controlled vocabulary or a taxonomy. The idea of tagging is that people create their own vocabulary through use.

This is interesting because this creates organizations and structures of concepts that might not have been, perhaps could not have been, anticipated by taxonomists or librarians.
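
As a sketch of the mechanism - the tags and resources here are invented for illustration - a folksonomy is nothing more than a map that accumulates through use:

    // Tagging sketch: each user attaches whatever word comes to mind to
    // a resource; the shared vocabulary is just whatever accumulates
    // through use. All tags and resources here are invented.
    var tags = {}; // tag -> list of resources carrying it

    function tag(resource, word) {
      word = word.toLowerCase();   // no controlled vocabulary, just usage
      if (!tags[word]) tags[word] = [];
      tags[word].push(resource);
    }

    tag('photo-123', 'sunset');
    tag('photo-123', 'moncton');
    tag('page-7', 'sunset');

    // Nobody decided in advance that 'sunset' was a category; it emerged:
    console.log(tags['sunset']); // ['photo-123', 'page-7']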

Another major technology underlying Web 2.0 is something called AJAX: asynchronous Javascript and XML. The name sounds complex. It looks complex, but it’s actually very simple.

When you submit information to a web site the old Web 1.0 way - I know you’ve all done this - you type information into a form. Fill out your name, address, blood type, mother’s maiden name, bank account number, the rest. Then you hit submit and the page reloads.

What AJAX does is allow you to enter information in the same way, but instead of reloading the entire page, a Javascript using something called XMLHttpRequest sends a message to a web server. The web server sends a message back to the Javascript, and then the Javascript updates just the little bit of the page that is relevant.

So, for example, if you’re logging in: here’s your page. You have a little login form; you fill out your name and password and hit submit. Instead of the whole page reloading, the little Javascript sends a message, gets a result back, and then just changes the little box where you log in.
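
To make that login example concrete, here is a minimal sketch of the pattern, assuming a hypothetical '/login' address on the server and hypothetical element ids; notice that the page itself never reloads:

    // AJAX sketch: send the login form in the background and update
    // only the login box, instead of reloading the whole page.
    // The '/login' endpoint and the element ids are hypothetical.
    function login() {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/login', true); // true = asynchronous
      xhr.setRequestHeader('Content-Type',
                           'application/x-www-form-urlencoded');
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // Replace just the login box with the server's response.
          document.getElementById('login-box').innerHTML = xhr.responseText;
        }
      };
      var name = encodeURIComponent(document.getElementById('name').value);
      var pass = encodeURIComponent(document.getElementById('password').value);
      xhr.send('name=' + name + '&password=' + pass);
    }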

This is important because it allows web pages to become interactive. It allows you to create a single web page that can have multiple interactions with the web server.

This allows for the creation of online applications, such as Google Documents, for example, or the Zoho suite of applications, or things like Gliffy.com, which is a web page that allows you to draw diagrams, flow charts and things like that. And there are perhaps hundreds and hundreds of these applications on the web (linking to Web 2.0 apps).

One of the things that we’re seeing is that more and more people are using online applications like this instead of applications loaded on their computer, like Microsoft Word documents and PowerPoint slides.

One of the reasons for this is that it doesn’t matter what computer you’re on. You can always access your application, and therefore you can always access your data. So if you’re at a cyber café in Kuala Lumpur and you log on, you can go to Google Docs and work on your document just as though you were on your own home computer.

The other thing that’s important is that if the document is on a web site like that, you can work on it at the same time that other people are working on it. A classic example, of course, is the wiki. But right now I’m writing a paper with a friend of mine. We’re working on Google Docs. Just last night, in fact, I was sitting there typing on the paper and he was typing on the paper at the same time. We were both working on the same document at the same time, and it’s live. Brilliant; we love it. Got half the paper written like that. Unfortunately that was the easy half.

Another major aspect of Web 2.0 technologies is representational state transfer (REST), probably best described in contrast to something you’ve probably seen a lot of buzz about: web services. Web services, which are supported by the simple object access protocol (SOAP), as diagrammed at the bottom, are very structured. I consider them very top-heavy; there’s a lot of infrastructure, and I think they’re slow. REST does the same sort of thing as web services, but in a very lightweight way. Data on the web and services on the web are accessed simply by sending a call to a web address, a URI.
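
As a rough illustration - the resource path and the fields in the response are hypothetical - a REST call is nothing more than an ordinary HTTP request to a URI, with the data coming back directly, no SOAP envelope around it:

    // REST sketch: fetch a resource by sending a plain HTTP GET to its
    // URI. The path and the response fields are hypothetical.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/photos/12345.json', true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var photo = JSON.parse(xhr.responseText); // plain data, no envelope
        console.log(photo.title, photo.latitude, photo.longitude);
      }
    };
    xhr.send();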

The importance of this is that it gives online web sites, online services a very simple, low overhead way of rapidly sending data back and forth. This allows us to combine, to merge and combine data from multiple web sites.

That leads us to the concept of the mashup. The idea of the mashup is that you take data from one application and data from another application and mash them together to create a new application.

One of my current favorites is working with Flickr, which is a place where I store my photographs, because I have nowhere near enough disk space on my computer to store them, especially now that I’ve got an eight-megapixel camera. So I upload my photos to Flickr, and then I just click on ‘organize my photos’ and then ‘map’. Map opens up a Google map, which Flickr has accessed from the Google site. Then I drag and drop my photo onto the map, and it creates location data for my photo (note: in fact it’s a Yahoo! Map. –ed).

So this mashup is allowing me to use two separate applications on separate web sites, Google Maps and Flickr, in order to create data that I couldn’t have created otherwise: very precise latitude and longitude locations for my photographs.
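
In code, a mashup can be as small as pulling data out of one service and handing it to another. This sketch assumes a hypothetical photo-location resource and a hypothetical map object with an addMarker method; neither is the real Flickr or Google interface:

    // Mashup sketch: take a photo's location from one (hypothetical)
    // photo service and plot it with another (hypothetical) map service.
    function plotPhoto(photoId, map) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/photos/' + photoId + '/location.json', true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          var loc = JSON.parse(xhr.responseText);
          map.addMarker(loc.latitude, loc.longitude); // hypothetical map call
        }
      };
      xhr.send();
    }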

Now, this is an aside, but imagine: this is currently happening. Hundreds of thousands, maybe millions of people are geotagging all of their Flickr photos. There are billions of Flickr photos. For any location on the planet, you’ll have photographs that have been taken by dozens and dozens and dozens of people.

Another major aspect is JSON. The only thing I’ll say about JSON right now, for those of you who worked on SCORM and sharable objects and faced the cross-domain scripting problem, is this: JSON solves the cross-domain scripting problem.

JSON allows what is known as the ‘tag hack’. Basically, it is a way of importing, into the head or the body of an HTML document, structured data in the form of Javascript arrays that can be used by the Javascript on the current page.

So basically I can create the data in one place, on my own web server, import it via JSON and use it on another web server, thus solving the cross-domain scripting problem. Brilliant, I love it; it’s so simple.
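
Here is a minimal sketch of the ‘tag hack’, assuming a remote service that wraps its JSON output in a callback function whose name you supply; the URL and the callback convention are hypothetical:

    // The 'tag hack' (JSONP): <script> tags are not subject to the
    // cross-domain restriction, so remote data is loaded as executable
    // Javascript that calls a function we define here.
    // The URL and the 'callback' parameter are hypothetical.
    function handleData(data) {
      // 'data' arrives as a ready-made Javascript object or array.
      document.getElementById('output').innerHTML = data.title;
    }

    var script = document.createElement('script');
    script.src = 'http://other-server.example.com/data.js?callback=handleData';
    document.getElementsByTagName('head')[0].appendChild(script);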

Just as an aside again, JSON eventually is going to be a serious, serious challenger to XML.

Finally, OpenID, which I mentioned earlier: OpenID is a very simple technology that allows people to have a single identity across all of these different web sites.

You can see how having the same identity across all of these different web sites is going to leverage a lot of the interactivity, a lot of the mashups that the connections between these web sites enable.

This treats - and this is a bit of an aside - your identity as personal rather than institutional. I have no idea how that’s going to play out in your context. I think your context is a bit special.

But for learning institutions, like universities and colleges, this is going to have a big impact because, historically the person, the individual, the student, has always been defined by the institution. You have your university log in, your university number, your university ID. That’s who you are.

But as learning happens more and more as a consequence of an interaction between multiple web sites, your identity has to persist across those web sites, which means it cannot be defined by any one of those web sites.

In practice what that means is it takes the definition of an identity out of the hands of the institution and puts it into the hands of the individual. You define your own identity, which you then project into the different web services.

So it means, basically - again, your context might be different from the rest of the world - an end to walled gardens: a way of sharing social networks and content networks across institutions, across web sites.

Now I just want to - and I have, what, one minute? - I just want to talk very briefly about networks. I’m going to zip through a few slides here and not deliver them, but what this has to do with is how to design these structures, these networks, so that they’re effective.

One of the things about networks is, just because it’s a network doesn’t mean it’s good. Just because it’s a network phenomenon doesn’t mean it’s good. Networks can have bad results as well. Sometimes they’re known as ‘cascade phenomena’.

If a disease spreads through a network of people, for example, that’s a bad thing. Well, unless you’re the disease, in which case it’s a good thing. But I mean, for people, it’s a bad thing.

So you need to set up your networks so that you’re resistant to this. You need to set up your networks so that connectivity is possible, so that you can create this web and interactivity, but also in such a way that you minimize the risk of cascade phenomena and other things that will inhibit the function of the network.

So – I’m just going to skip through why networks and skip through distinction between groups and networks, which is a way of approaching this - and talk about networks as ecosystems and think about what makes a successful ecosystem.

Out of this - again, this is just my take on this, right? - I’ve come up with what I call the ‘semantic principle’. The semantic principle is the set of conditions that allows you to create reliable networks. There are four major parts to the semantic principle: diversity, autonomy, openness and interaction.

Diversity is the idea that the elements or the members of the networks represent the widest possible spectrum of points of view. You think about your brain, right? You don’t want all of your neurons doing the same thing all the time in your brain. That would be really silly. Your brain would cease to function. Your brain would be inert, like a lump of iron.

What you want are different neurons doing different things at different times. You want each neuron to have its own individual, unique set of connections, its own different perspective or point of view on the world. The idea of the network is you’re collecting these many different perspectives and combining them to create a new, overall view of reality.

Autonomy - and this is tied very closely to diversity - in order to get this diversity, each of these individuals in the network needs to work autonomously.

By ‘autonomously’ I don’t mean simply ‘making their own decisions’, but also making these decisions against their own background, against their own perspective, their own culture, their own world view.

So, we get a very genuine set of – how do I want to say this – very genuine set of distinct points of view or perspectives on any given state of affairs or entity.

Openness – we need openness in order to ensure that all of these different perspectives are heard and that no perspective is omitted from the overall collection.

Openness allows all perspectives from all points of view to contribute to the network, to satisfy what Rudolf Carnap used to call the ‘requirement of total evidence’ - the requirement that we take account of all the available evidence.

Then finally, interaction, or connectivity. This is kind of a tricky principle, but the idea here is that the knowledge in the network is not the propagation of some piece of knowledge from one individual to another; rather, the knowledge is created by the interaction of the network as a whole.

It is not simply a reflection of the knowledge contained in any particular individual. That’s a very tricky concept, especially when explained in less than five seconds.

But think about it: you’re looking at a television set and you see a picture of Richard Nixon. What makes the picture a picture of Richard Nixon has nothing to do with any individual pixel. There’s no pixel that’s a little tiny representation of Richard Nixon that is blown up big. Each pixel is just a little black and white dot. Richard Nixon exists in that set of pixels only as a result of all of those pixels working together.

Similarly, another example: flying an airplane from London to Canada. No single individual can do this. No single individual can build the airplane, make the tires, pump the gas, navigate, take off, fly the airplane across the ocean, and land it without crashing. It’s too much work for any given individual to do. That knowledge is distributed across a set of individuals.

Those four principles, those four semantic principles, are intended to provide, if you will, a framework or a metric for evaluating, designing and selecting Web 2.0 and E-Learning 2.0 technologies. With that, more or less on time, I conclude my thoughts, and I thank you for your patience and your interest.

Tuesday, 12 February 2008

Improving Socio-Economic Status

Posted on 09:42 by Unknown
This is a response to Ken DeRosa in D-Ed Reckoning, in which he argues, "Let's stop wasting time with these misguided schemes (to improve learning by alleviating poverty) and focus our efforts elsewhere."

As a foundation for public policy, the research in this post is surprisingly slim (and in places dubious) while the argumentation is not especially tight.

DeRosa asks, essentially, "can student achievement be improved by artificially increasing the child's SES?"

I have no idea what 'artificially' means in this context, since I have no idea what a 'natural' or 'real' SES would be for a given person. All SESs are artificial constructs: they exist as a result of the financial and other allocations provided to them by society.

The proposition being advanced in his post is, essentially, that improvements in a child's SES will not result in increased educational outcome. That is the only way he can conclude "It's all well and good to attempt to ameliorate the plight of the poor.... Just, don't expect that it's going to improve student achievement or improve real SES in the long run across generations."

In particular, DeRosa appears to be opposing a particular class of improvements of SES: "to increase the family income of poor families." The rest is left to "hope".

It is unreasonable to suppose that such a plan would work, and to my knowledge, anti-poverty advocates do not support such an approach. The effects of poverty are persistent. Throwing money at them after the fact and then saying that money fails to fix the problem is like steering after the Titanic has hit the iceberg and then saying steering makes no difference.

Poverty agencies all know that financial support is a necessary, but not a sufficient, condition for the alleviation of poverty. What conditions are also required will be discussed below. But it should be clear that the failure of money alone to solve the problems does not mean that the problems are not caused, at least in part, by a lack of money.

DeRosa continues, "At least that's the theory. We've been testing this theory for forty years now by providing massive injections of financial assistance to the poor. The gains in academic achievement, however, have proven to be elusive."

The use of the editorial 'we' here is misleading. In many jurisdictions, the gains have been impressive. In countries like Canada, Finland, Denmark, and others, something approaching economic equality has been achieved. And the educational consequences, to judge by the PISA test results, have been impressive.

The "massive injections of financial assistance" to the poor in the United states have obviously been insufficient. In this society, the additional measures appear not to have been undertaken. One wonders about housing standards, health care, and educational services.

It is also possible that the amount of money spent is, in fact, itself insufficient. DeRosa may argue that "this is a ridiculous argument," however, one would presume that there is a minimal cost to educating all children in a society, and if that cost is not met, then some children will not be educated. This argument is not ridiculous at all - it is merely not one that may be resolved easily or conclusively in a short discussion.

But all of these considerations aside: DeRosa's main argument is that differences in educational outcomes are due to genetic factors, and therefore money spent to change environmental factors is money wasted. At least, that's what the introduction of the Minnesota twin studies suggests. And thus he says, "The results have been consistent. About three quarters of the variance in IQ and student achievement is attributable to genetic factors. While the variance attributable to familial factors is about zero."

This result is subject to numerous criticisms:

First, it is not clear that IQ tests are a reliable measurement of educational outcomes. The tests are intended to identify innate, or native, intelligence, not actual learning achieved. Thus, it is not surprising that most of the findings would be explained by innate or native factors.

Second, the reasoning behind the attribution of genetics as cause is flawed. As summarized in the Wikipedia page we are linked to, "the similarities between twins are due to genes, not environment, since the differences between twins reared apart must be due totally to the environment." This is a non sequitur. There are numerous possible causes of similarity other than genetic factors.

Third, even the studies alluded to indicate that genetics play only a minority role. The Harris paper cited, for example, states that "Heritability generally accounts for 40% to 50% of the variance in personality characteristics." (p. 459)

Fourth, the examples provided aren't even looking for differences in learning outcomes. As the Wikipedia article summarizes, "Of interest to researchers are prevalence of psychopathology, substance abuse, divorce, leadership, and other traits and behaviors related to mental and physical health, relationships, and religiosity."

Fifth, the argument equivocates between types of influence. We began, above, talking about socio-economic status (SES). But these studies purport to show, as DeRosa states, "It was found that the contributions to the correlation between twins in g by... all dimensions of the Family Environmental Scale... were all zero to within two decimal places."

Of course, even that is a ridiculous conclusion, and DeRosa is quick to admit it: "Which is not to say that abusive parents and ghetto life aren't going to have a detrimental effect." Of course they are.

The more likely explanation for the experimental results in these very small and very localized studies is that the families were not different in any way that mattered. And in particular, none of the separated twins was raised as a malnourished poor black kid in the ghetto (for one thing, experimental ethics would have prohibited it, as placing a child in such conditions would constitute abuse of the experimental subject).

Sixth, the apparently 'genetic' differences very likely have other causes. The data cited from the Minnesota Transracial Adoption Study is purported to show that placing poor black kids into the homes of rich white people didn't change their educational outcomes (as misleadingly measured via IQ tests).

But what we know for certain is that none of these poor black kids was born in the white family. The child's entire prenatal history - including any possibility for malnutrition, cigarette smoking, drug abuse, pollution, and a variety of other environmental factors, may have played into the child's educational potential. For some kids, the ship hits the iceberg before they are even born. But this doesn't mean that the deficiencies are genetic. It just means that no amount of money after birth will alleviate the impact of poverty before birth.

DeRosa concludes "that low-SES does not cause or significantly contribute to low student achievement. Further, student achievement will not be significantly improved by trying to artificially increase a child's SES."

This is simply not supported by the evidence he offers. At best, what he's shown is that some of the damage caused by poverty is permanent, and that other parts of the damage caused by poverty stem from the environment outside the home, and not conditions inside the home. But poverty advocates know that as well, which is why they generally resist 'blame-the-parent' programs.

He also concludes, "genetics is a large fly in the ointment that has a more significant effect on outcomes than the environmental factors in any event." This again is not shown.

Again - as I have argued before - these outcomes have complex causes, and these complex causes are often misrepresented in simple variable-effect surveys. Let me illustrate with one example.

A major component in my own education was access to classic works of literature as a child. But in order for me to benefit from these books, two things must hold concurrently: first, the books must be present in the home, and second, my cultural environment must favour reading.

Now suppose there are no books in the home, but the environment contains plenty of books - there's a school library, say. Then, given a cultural environment that favours reading, I get the same benefit.

Now from this it looks like it doesn't matter whether or not the parents have books in the home. And this is the outcome of the twin studies. But now suppose there are no books in the home and no books in the environment. Even if the culture supports reading, I cannot benefit. But I *would* have benefited had the resources been available. Unfortunately, this counterfactual is never measured.
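To see how the counterfactual drops out of the measurement, here is a toy sketch in Python (entirely my own construction; the variables and values are invented, not drawn from any study):

    # A toy model: a child benefits if books are available anywhere AND the
    # culture supports reading - so 'books in the home' measures as having
    # zero effect whenever the environment supplies books, even though the
    # counterfactual still matters.

    def benefit(books_in_home, books_in_environment, culture_favours_reading):
        return (books_in_home or books_in_environment) and culture_favours_reading

    # a study conducted where school libraries exist everywhere:
    for home in (True, False):
        print("books at home:", home, "->", benefit(home, True, True))
    # both lines print True: the home variable appears to have no effect

    # the unmeasured counterfactual: remove books from the environment too
    print(benefit(False, False, True))   # False - the child cannot benefit
    print(benefit(True, False, True))    # True - but would have, given the books

The measured 'zero effect' of the home variable is an artifact of an environment that never varied.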

Poverty creates ripple effects that bounce back and forth through a child's life. The child may have to deal with prenatal or infant malnourishment, substandard living and health conditions, poor access to resources both at home and at school, community attitudes that enforce a norm by discouraging achievement, chronic health and social issues caused by pollution, crime, and other factors, and systemic discrimination based on race, appearance, accent, and other factors.

DeRosa wraps up, "It's all well and good to attempt to ameliorate the plight of the poor. We do quite a bit already, perhaps too much."

The evidence doesn't bear that out. The evidence suggests that much more effort is made to find 'magic bullet' solutions - like small schools, phonics, charter schools, whatever - anything, everything EXCEPT to acknowledge the role that poverty, in an unequal society, plays in educational outcomes.

Instead of trying so hard to make a child's poverty a non-issue in their education, let me suggest a more productive, research-based, and enlightened strategy: feed them.

Then, maybe, if there's any money left over, give them food for thought - access to reading and learning material, tools to manipulate and create with, a space for them to be themselves, and an environment that values learning, creativity, and achievement.

Oh, that's not a magic pill solution either. It's still only part of the solution. The rest of the solution involves the broader initiatives we see in the countries that score well in the PISA evaluations - government interest and investment in education, public or affordable health care, broad equality of income (whether government mandated or won through union action), enforcement of housing and other health and cleanliness standards, and positive and accessible role models in media. To name a few.

The Competition in Health Care

Posted on 06:58 by Unknown
Responding to Marginal Revolution, which writes, "Every year prices would fall in real terms, quality would improve, and coverage would be expanded. Imagine the whole health care sector working like laser eye surgery or cosmetic surgery."

The nice thing about laser eye surgery is that it is not a life-or-death thing. People can do without it, which means that the demand can more or less match the supply, as regulated by the pricing policies of laser eye surgery providers and the purchasing decisions of potential clients.

But this is not true of health care in general. For the most part, people cannot do without it (or, in some cases, doing without it means the likelihood of much larger health care needs in the future). So there isn't an abatement of demand as a result of pricing policies. This means that, all things being equal, if the supply of health care is even slightly less than the demand, nothing prevents a price increase into infinity, except the premature deaths of those unable to pay. Which is the current situation in the private health care system.

Most advocates of private health care take an attitude something akin to "it's OK if the poor people die off prematurely." Usually it is couched in more diplomatic language, but the sentiment is nonetheless there. And, indeed, it is unavoidable. You cannot have free market health care otherwise. Any attempt to mitigate this effect is a step toward public healthcare, and the question at that point resolves to one of how best to deliver health care services to the entire population, rather than one debating whether or not health care should be subject to the free market.

The proposition in the post above is essentially that improvements in health care technology will make health care accessible and affordable for everyone. Only in this way would it be possible for market forces to be able to balance the production of and demand for health care services. The suggestion that we should plan for such a time is well taken. But the suggestion that our system should be currently structured as though such a time were already here is not.

Marginal Revolution writes, "But if we institute a single-payer system, or highly regulated mandates, we will never have much chance of arriving in that world. Ever." This is simply false.

Historically, when government has mediated distribution because of market failures, such as shortages in supply that allow prices to rise without limit, such mediation has persisted only so long as the shortages have prevailed. Food rationing during the war years evaporated once supplies expanded after the conflict. Public housing in many areas has gradually given way to rental and owned accommodations. In Canada, telephone companies and energy companies, both owned by the government, have been privatized. The mechanisms exist, and so long as there are people working for profit, there will be at least some movement toward privatization.

And thus with health care as well. In Canada, the movement is strong and well-funded (from U.S. sources). Even so, Canadians en masse vote against such measures because the supply of health care services is not yet sufficient. We would not be able to depend on being able to access and to afford health care in a private system, so we preserve our public system. We have the example of the United States, the richest nation in the world, with a private system that leaves 50 million people uninsured, and which sees people financially ruined by illnesses that would be nothing more than an inconvenience in Canada.

I, too, hope for the day that health care will be as common and as accessible as grocery shopping, where I can choose which store I go to, where I can expect government regulations to monitor the quality and safety of the offering, and the marketplace to moderate the price. But while oranges go for a couple of dollars a dozen, quality health care is rather more expensive, and rather less accessible.

That said, people who are proponents of private health care can take concrete and useful initiatives today, to hasten the day when costs approach clients' ability to pay. Instead of trying to force a marketplace solution into a market that cannot sustain it, advocates should be lobbying for and working toward policies that will significantly lower the cost, ensure the quality, and increase the affordability, of health care. For example:

- reconsider patent protections on drugs. This major form of government intervention in the health care marketplace has been one of the most significant drivers of increased costs in recent years. Drug company lobbies have successfully convinced governments to extend periods of patent protection, with a corresponding rise in the price of the drugs protected.

This system actually slows innovation, as improved drugs will not be rolled out until the protection period for other drugs expires. This is especially the case for high-end and specialist drugs, where there is very little competition.

Patent protection also slows the research effort as laboratories try to keep their processes secret in order to maintain an effective monopoly on research. Ironically so, since most of the research is funded directly by government, or indirectly through the participation of university labs and professors.

- mandate open access to all government-funded research. This would ensure that any research that is funded by the taxpayer is available to all agencies, thus maximizing the propagation of that research. In this way, the same work could benefit a large number of companies, rather than the one or two it does now.

This stipulation should apply to raw data as well (and perhaps more importantly). The sorts of discoveries Kepler made from Tycho Brahe's observations would be impossible in today's environment, because Kepler would not have had access to Brahe's observations.

- voluntary patient-owned electronic health care records. Creating an effective system of one-patient one-record would enormously streamline health care and reporting processes. However, clients quite naturally trust neither governments nor corporations to preserve the confidentiality of such records (in large part because such records would later be used to deny health care insurance).

Thus the mechanisms prescribed in the 'Innovations in Health Information Technology' booklet are, for the most part, a step in a positive direction, and merit consideration by public and private health care providers alike. That said, if our experience in other technological domains is any guide, care must be taken to ensure operators are willing to adhere to common and compatible standards for electronic services; a health care record that "doesn't run on Linux", for example, is unacceptable.

I'm sure there are other measures that could be considered, and of course I have an open mind about them. My own stance regarding health care is not motivated by partisan politics, but rather, by the conviction that it is wrong to allow people to die prematurely merely because they are poor.

When I see a willingness on the part of those people supporting private health care to genuinely improve access, increase quality, and lower costs, I am supportive and willing to work alongside them. But when the point of their advocacy is merely to create an environment in which they and their friends can take advantage of a market failure to enrich themselves at the expense of people's health, they lose my support, and frankly, my respect.

Saturday, 9 February 2008

How Memory Works

Posted on 08:43 by Unknown
This is a summary of a paper by Eric R. Kandel on the molecular and synaptic basis for memory, Genes, synapses and memory storage. Kandel won the 2000 Nobel Prize for this work. I was moved to write this after listening to a segment of CBC's Ideas program discussing the nature of learning and memory. At the end of this post, I draw inferences from Kandel's work to my own.

The problem of memory has two major parts:
  • The systems component, which concerns "where in the brain memory is stored and how neural circuits work together to create, process, and recall memories. "
  • The molecular component which studies "the mechanisms whereby synapses change and information is stored"
The systems component - a history:
  • 1865 - Pierre-Paul Broca identifies speech production with a specific area of the brain.
  • 1876 - Carl Wernicke identifies language comprehension with a different area of the brain and suggests that complex behaviour requires the interaction of different brain areas.
  • 1929 - Efforts to localize memory fail; Karl Lashley formulates the Law of Mass Action: "the extent of a memory deficit is correlated with the size of a cortical lesion but not with the specific site of that lesion."
  • 1938 - Wilder Penfield localizes specific memories in epileptic patients (this was the subject of a 'Heritage Minute' video in Canada - "I smell toast burning").
  • 1957 - Scoville and Milner localize memory formation in the medial temporal lobe and show there are multiple, functionally specialized memory systems in the brain.
The idea that there are multiple memory systems in the brain has a long history in the philosophy of psychology:
  • early 1800s - French philosopher Maine de Biran argues memory can be subdivided into different systems for ideas, feelings and habits
  • early 1900s - William James divides memory into distinct temporal phases
  • 1913 - Henri Bergson distinguishes between conscious memory and habit
  • 1949 - Gilbert Ryle distinguishes between 'knowing that' and 'knowing how'
  • (1956 - Michael Polanyi - tacit knowledge (this isn't mentioned by Kandel))
  • 1969 - Jerome Bruner describes ‘knowing that’ as a memory with record and ‘knowing how’ as a memory without record
Scoville and Milner's studies of H.M., a patient who had the medial temporal lobe removed, yielded three major findings:
  • There was a short-term memory unaffected by the loss of other memory functions.
  • There was a long-term memory of events prior to the operation.
  • H.M. could form some long-term memories after the operation, but denied doing so.
This established the distinctions postulated by the philosophers. This distinction between types of long-term memory is now characterized using the terms:
  • implicit - corresponding to 'knowing how', is habitual, unarticulated, and not recorded
  • explicit - corresponding to 'knowing that', is cognitive, articulated and recorded

The molecular component

Kandel started by looking at the hippocampus but decided to focus on the simplest possible case, the marine snail Aplysia.

Why study Aplysia?
  • It is smart (for a snail) - it can create both short-term and long-term memories
  • It is simple - it has only 20,000 neural cells
  • The neural cells are quite large, and hence easy to study
  • It is possible to map in detail the synaptic connections between cells with each other and with sensory and motor systems.
What they found (this is the key finding):
  • Short-term storage for implicit memory involves functional changes in the strength of pre-existing synaptic connections.
  • Long-term storage for implicit memory involves the synthesis of new protein and the growth of new connections.
The protein synthesis required to convert short-term to long-term memory developed early in evolution and hence is preserved across life forms; it is a general mechanism, responsible for both explicit and implicit memories.

Learning in pre-existing synaptic connections

Let's look at this in detail:

There are two major types of conditioning:
  • habituation - an animal perceives a sensation as innocuous and ignores it
  • sensitization - an animal perceives a sensation as noxious and tries to defend itself or flee
And two forms of learning:
  • non-associative - an animal habituates or sensitizes to a single stimulus
  • associative - an animal habituates or sensitizes to a pair of unrelated stimuli
In order to understand how the animal learns, therefore, "one needs in particular to work out the pathway whereby the sensory stimulus of the reflex leads to a behavioral response."

In the short term, habituation is represented by the weakening of the synaptic connection, and the resulting decrease in the release of glutamate, while sensitization is represented by the strengthening of the synaptic connection, and the corresponding increase in the release of glutamate.

Kandel doesn't include the diagram from Mann at right in the paper, but it nicely illustrates the process. The little blue dots represent the release of glutamate.
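To make this concrete, here is a minimal sketch in Python (my own toy model, not Kandel's; the scaling factors are invented):

    # The strength value stands for the release efficacy of an existing
    # connection - roughly, how much glutamate is released per stimulus.

    class Synapse:
        def __init__(self, strength=1.0):
            self.strength = strength      # efficacy of the pre-existing connection

        def habituate(self):
            self.strength *= 0.8          # innocuous stimulus: less glutamate released

        def sensitize(self):
            self.strength *= 1.5          # noxious stimulus: more glutamate released

    s = Synapse()
    for _ in range(5):
        s.habituate()                     # repeated harmless touches
    print(round(s.strength, 3))           # 0.328 - a weakened response

    s = Synapse()
    s.sensitize()                         # a single shock
    print(s.strength)                     # 1.5 - a heightened response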

Kandel's description of this process (pp. 34-35) provides the chemical basis for Hebbian (associative) learning:

"Two events need to happen simultaneously: glutamate needs to bind to the postsynaptic nmda receptor, and the postsynaptic membrane needs to be depolarized substantially... This coincident activation of the nmda receptor and postsynaptic depolarization only occur when the weak siphon stimulus (cs) and the strong tail shock (us), are paired together."

Three major lessons are drawn from this work:
  • learning can lead to changes in the strength of connections (synaptic strength)
  • a single connection can participate in several types of learning
  • each of the three simple types of learning - habituation, sensitization and classic conditioning - gives rise to both short-term and long-term memory, depending on the number of repetitions
The growth of new connections

History of the distinction between short-term and long-term memory:
  • 1885 - Hermann Ebbinghaus identifies two phases while learning nonsense syllables
  • 1941 - Zubin and Barrera note the distinction in people hit in the head
  • 1960s - Louis Flexner and his colleagues identify a biochemical difference between them; long-term memory requires the synthesis of new protein during the consolidation phase
What's important is that there is a genetic basis for both the synthesis of the protein and for the consolidation phase.

Kandel notes, "Aplysia and Drosophila [a type of fruit fly] share some of the same genes and proteins for converting short- to long-term memory... creb has a role in learning in Drosophila that is similar or identical to its role in Aplysia, demonstrating striking evolutionary conservation. "

The mechanisms through which the proteins - CREB-1 and CREB-2 (aka ATF-2) - interact with the nucleus are complex and diagrammed (from Mann) at right.

In combination with other factors (such as, in the fruit fly, the loss of a cell adhesion molecule), the interaction with the nucleus stimulates genes that result in the production of new synaptic connections.

Explicit memory storage

Explicit memory is more complex because:
  • it involves conscious participation in the memory recall
  • it doesn't depend on a simple stimulus; it usually depends on several sensory cues
Based on studies of mice, the hippocampus appears to play a major role in explicit memory. The hippocampus is basically a set of interconnected neural cell fields. It acts as a clearing-house for sensory input. Plasticity (the growth of new connections) has been discovered at all levels of the hippocampus. And the creb proteins appear once again to be implicated in the production of new connections.

Other work has demonstrated the plasticity of sensory systems. For example, experiments in kittens have demonstrated plasticity in the visual system. Cortical plasticity has also been demonstrated in adult monkeys. "These several studies suggest that long-term memory storage leads to anatomical changes in the mammalian and even the human brain much as it does in Aplysia."

A good example is the work done correlating the growth of synaptic connections and place memory. There are cells in the hippocampus, called pyramidal cells, that act as place cells - they fire when we occupy a certain place in our environment. So these cells form a cognitive 'map' of the environment. Various manipulations can lead to remapping, in which all the place cells change.
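A crude way to picture this (my own illustration; the location and cell names are invented):

    # The cognitive 'map' as a mapping from locations to the cells that
    # fire there; remapping is a wholesale reassignment of that mapping.

    import random

    locations = ["nest", "doorway", "corner", "feeder"]
    cells = ["cell_a", "cell_b", "cell_c", "cell_d"]

    place_map = dict(zip(locations, cells))    # the cognitive 'map'
    print(place_map["feeder"])                 # the cell that fires at the feeder

    random.shuffle(cells)                      # manipulate the environment
    place_map = dict(zip(locations, cells))    # remapping: the assignments change
    print(place_map["feeder"])                 # a (likely) different cell now fires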

Consequences

a. Learning and Memory

At this point we reach the end of Kandel's paper. What are we to make of these discoveries? What lessons should we draw?

For me, it requires a clarification of a comment that I have made on several occasions recently: learning is not memory. Kandel does draw a distinction (p. 31): "Learning refers to the acquisition of new information about the world and memory refers to the retention of that information over time." But what does that mean?
  • Learning is a semantic process. It is about things. It has meaning.
  • Memory is a syntactic process. It is a set of mechanisms. It may or may not have meaning.
This is a difficult distinction because the two are so frequently found in the same location. Pyramidal cells, for example, which contain a 'map' of the environment, are created through a process of remembering, as a result of the changes of synaptic connections in the hippocampus, but also represent (via the sensory impressions that cause those changes) distinct places in the environment.

Nonetheless, the two are not the same. It should be clear from this work that it is possible to create memories that have no semantic content. It should be clear that through manipulations of the physical process we can create meaningless memories.

This, in turn, tells us a lot about the reliability of synaptic networks, and hence, of networks in general. In reliable networks, the mechanisms that cause the creation of connections between neurons are meaning-preserving, that is, they represent memories, and not merely manipulations of the process.

(I am being careful about how I state this, because there will be different accounts of what constitutes 'meaning-preserving'.)

This suggests:
  • Approaches to testing that test for learning, and not merely memory: such testing will be individual-centric (like the environment maps in the hippocampus) and not standardized (which is more likely to reflect syntactic manipulations).
  • Approaches to teaching which are based on creating semantic connections with the world, through the production of meaningful experiences, rather than syntactic manipulations of memory, such as memorization and rote learning.
But this needs to be studied further. What constitutes meaning-preservation? It is not (as I'll show below) truth-preservation. But what is it? How do we measure for meaning, and not just syntactic compliance? Can knowing how we learn help us determine what we learn?

b. Practice and Reflection

Again, as noted previously, learning is the result of repeated experiences of the same (or similar) type; the neural connections required for long-term memory will not be created without this repetition.

Learning is therefore not simply the presentation of information to an individual. It is not simply the transfer of a fact from one person to another. At best, this process could create only a short-term memory. In order to create the necessary neural connections, we need to stimulate the production of creb proteins, which happens only through repetition.
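Here is a speculative sketch of that claim (my own toy model; the threshold and decay rate are invented stand-ins for the CREB mechanism, not measured values):

    # Each presentation leaves a decaying short-term trace; only
    # accumulated repetitions trip the 'protein synthesis' switch
    # that makes the change persistent.

    CONSOLIDATION_THRESHOLD = 3           # stand-in for the CREB trigger

    def rehearse(presentations):
        short_term = 0.0
        consolidated = False
        for i in range(presentations):
            short_term = short_term * 0.5 + 1.0    # decaying short-term trace
            if i + 1 >= CONSOLIDATION_THRESHOLD:
                consolidated = True                # a new connection is grown
        return round(short_term, 2), consolidated

    print(rehearse(1))   # (1.0, False) - one telling leaves only a fading trace
    print(rehearse(4))   # (1.88, True) - repetition triggers long-term storage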

Advertisers, of course, know this, which is why they repeat brand names, jingles and phone numbers over and over. Seasoned politicians also know this, which is why the best orators employ catchy phrases that will be repeated over and over, as in the video Yes We Can (maybe one of the best political advertisements ever).

As I have said before, learning is not content. Learning is something over and above the presentation of semantically meaningful information to a person. To learn, one does not simply 'acquire' content, one grows. To learn is a physical act, not a merely mental act.

Again, though, we want to look at this more closely. For example, what constitutes a repetition?

For example: the need for repetition would seem to suggest that a lecture would be a poor form of teaching, since it does not produce repetition. But:
  • Can we style lectures such that the repetition is contained in the lecture?
  • Can people listening to lectures create repetition through the use of different modalities, such as taking notes, live-blogging or summarizing?
  • Can we create repetitions through the conduct of lecture-related tasks, such as projects or problem-solving based on the contents of lectures?
  • Does learning for ourselves stimulate the production of the repetitions required for memory?
  • Is there a connection between semantic content and repetition - does learning in authentic contexts increase the probability of remembering?
I would suggest that the answer to each of these questions is 'yes'. But they are the sorts of things that bear further investigation.

c. The nature of knowledge and inference

There is a persistent school of thought in both the philosophy of psychology and also in educational theory that suggests that cognition is based on logical and linguistic rules, that there is a logical syntax that governs learning and cognition.

Examples of this range from the postulation of Chomsky's deep grammar to Fodor's language of thought to Hempel's H-D model of the sciences. The proposition is essentially that meaning-preservation is tantamount to truth-preservation, where truth-preservation is as is well understood from logic and mathematics.

But what we learn here is that learning is associative, not propositional. The mechanisms that govern this process are not expressions of truth-preservation but are - at best - expressions of meaning-preservation, where meaning has to do with sensory perceptions and states of affairs in the environment rather than abstract principles of logic and mathematics.

I have expressed this in the past as follows:

Our old understanding of logic, inference and discovery is based on universals:
– rules, such as rules of inference, or natural laws
– categories, such as classifications and taxonomies

Our new understanding, though, is based on pattern recognition:
– patterns, such as the activations of similar sets of neurons
– similarities, such as the perception of similar properties in nature
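The contrast can be made concrete with a small sketch (my example; the features, exemplars and rule are all invented):

    # Rule-based categorization applies a universal; similarity-based
    # categorization compares a new case to stored exemplars - closer in
    # spirit to the associative mechanisms described above.

    def rule_based(x):
        # universal rule: anything over 1 metre tall is 'large'
        return "large" if x["height"] > 1.0 else "small"

    def similarity_based(x, exemplars):
        # the nearest stored exemplar wins
        def distance(a, b):
            return abs(a["height"] - b["height"]) + abs(a["weight"] - b["weight"])
        return min(exemplars, key=lambda e: distance(x, e))["label"]

    exemplars = [
        {"height": 1.8, "weight": 80, "label": "large"},
        {"height": 0.3, "weight": 4,  "label": "small"},
    ]
    novel = {"height": 1.1, "weight": 10}
    print(rule_based(novel))                   # 'large' - the rule fires categorically
    print(similarity_based(novel, exemplars))  # 'small' - similarity to exemplars disagrees

The rule fires categorically on a single feature; the similarity judgment weighs the whole pattern, and the two can disagree.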

That is not to say that these universal principles play no role in our understanding. It means, rather, that we need to see them in a new light:
  • These principles represent 'convenient fictions', not underlying principles of nature
  • These principles are learned - they are not innate
There's a lot more work to be done here. The nature of inference based on patterns and similarities is poorly understood. It is one thing to say things like 'an understanding of learning based on simple causation is mistaken' and quite another to describe the complex mechanisms that actually occur.

We need to dig into the logic of similarity, following the work of people like Tversky and Varela, to conjoin this with our understandings of social network theory and graph theory.

Tuesday, 5 February 2008

Meaning as Medium

Posted on 07:24 by Unknown
McLuhan's 'The medium is the message' has always been interpreted as being about the physical substrate. That allows people to talk about an electric light bulb as carrying a message, or to say things like 'the same content on television means something different than that content in a newspaper'. Etc.

But I think there's another, more subtle, aspect to the slogan 'the medium is the message'. And that is this: that the 'meaning' of a message isn't the meaning of the words (say) contained in the message. That this content is the carrier for the message, which is (in a certain sense) subsymbolic. For example, when you say 'Get out of town' to a lawbreaker, you mean one thing, and when you say 'Get out of town' jokingly to a friend, you mean something else. The 'message' - that is, the words 'Get out of town' - does not constitute the content of the message at all; the 'content' is actually the reaction produced in the receiver by the message (which is why an electric light bulb and a 300 page book can both be messages).

Now we can take this a step further (and this is what I think of as 'the medium is the meaning'). The 'meaning' of the message, properly so-called, is constituted by the state of affairs described by (referred to, represented by) the message. Thus, 'snow is white' means that snow is white. But this meaning is not the content of the message. You may be telling me that 'snow is white' but what you are actually saying depends on a wide range of factors - whether or not I had previously thought that snow was white, for example. On this view, again, you would think of the meaning as the carrier of the content.

But what is the message? It is a bit misleading to think of it as something that is actually 'carried'. Because, at best, it represents some intent on the part of the sender, and intent isn't something that can be carried in a message (it can be expressed in a message, but this is something very different). This is important because it breaks down the idea that there is some zone of shared meaning (or whatever it's called) between the two speakers. Even if there is a shared meaning, it's irrelevant, because the meaning is just the medium. It is simply the place where the interaction occurs. There is an interaction, but the interaction is not the transfer of some meaning. Rather, it is an attempt by a sender to express an intent - that is, to carry out some action (specifically, the action of causing (something like) a desired brain-state to occur in the listener).

The 'content', as McLuhan would say, is the receiver. More precisely, the content is the resulting brain state. The content is the change in belief, attitude, expression, etc., in the listener, that is a result of the transmitting of the message, the rest of the environment at the time, and the receiver's internal state. "What colour is the wall?" asks the listener. You turn on the light bulb. "Ah, I see," he says.

This entire system is fraught with incompleteness and vagueness. The sender, for example, can only have a partial idea of the content he or she is actually sending with a message. There is the sender's intended content ('the wall colour is green') which - inescapably - becomes entwined with a host of associated and unassociated content when encoded into those words. Because the set of words 'the wall is green' is inevitably a crude abstraction of the actual mental state the sender wishes to reproduce in the listener. The encoding itself encodes, en passant, a raft of cultural and situational baggage. It exposes the sender as an English speaker, who uses the system of six primary colours, who is referring to a terrestrial object (otherwise, it would be the 'bulkhead'), etc. The tone of voice, handwriting, etc., can contain a multitude. And the like. The actual transmission can best be seen only as a scrap - the barest hint, which will allow the receiver to build a complex mental picture, one which presumably accords with the one the sender had hoped to create.

The receiver receives the sentence 'the wall is green' and decodes the 'meaning' of the sentence, which is a reference to a colour of a wall. This may or may not have been accompanied by some sensory experience or action (the turning on of a light bulb, say). These all, depending on all the other factors, cause a new mental state to emerge in the receiver's mind. It may even be accompanied by some internal perceptions (such as mentally talking to oneself). The receiver may think, on hearing the sentence, "he thinks I'm stupid." It should be clear that the 'content' of the message, as received, may have little to do with the content of the message as sent. Moreover, the sender knows this. The sender may intentionally cause the receiver to receive the insult. The expression of the intent may be semantically unrelated to the intent itself (just as the swinging of a bat is semantically unrelated to the hitting of a home run - it is only when viewed from a particular perspective that one can conjoin the one as an expression of the intent to do the other).

This isn't unique, of course. J.L. Austin spoke of 'speech acts' decades ago. John Searle talks about 'indirect' or 'illocutionary' speech acts. Max Weber talks about 'sense' and 'intention'. Wittgenstein's doctrine that 'meaning is use' could be considered an 'action theory of language'. Habermas talks about language as the vehicle for social action.

And there may not be any specific intent (not even of externality) in the sender's mind. "He talks just to hear the sound of his own voice." A lot of communication is just verbal flatulence. It nonetheless has content, because it nonetheless has an effect on the listener (however minimal). The actual effect may have little, if anything, to do with the intended effect. Semantics is distinct from cause; the sender's intention does not have causal powers, only his or her actions do (and intention underdetermines action, and action under-expresses intention). That said, we are sensitive as listeners to this intention, and have a means (mirror neurons, for example) of perceiving it.

Language is the vehicle we use to extend ourselves into the world. It is what we use to express our intent, and hence to manifest our thoughts as external realities.

Monday, 4 February 2008

What I Learned In High School

Posted on 15:59 by Unknown
Following on a thread from Clay Burrell and Harold Jarche...

I actually learned a great deal in high school. Most of it wasn't the approved curriculum.

English:

In grade ten my English teacher Jamie Bell - a young idealistic educator full of new ideas - had us all do writing journals. It could be anything we wanted and - as I've mentioned before - I filled mine with stories, crosswords, drawings and more. From that point on I kept writing for myself, the way that project taught me, filling numerous notebooks before finding a web space in which to express myself.

I was also taught public speaking in English class. Technically this began in grade five. But it continued throughout high school. I won the school championships in grades five, eight, nine, ten and eleven (it was a small school).

I discovered science fiction in high school (specifically, John Christopher's 'The White Mountains' and Arthur C. Clarke's 'A Fall of Moondust') and read that during English class instead of the official texts. I was also reading the classics (from a series that my mother bought) - Twain, Crane, Stevenson, Swift, Weiss, London and many more. These gave me a vista far more sweeping than the school texts, and let me see myself as (potentially) a hero.

In grade 12 I was supposed to read Dickens (we were finally done with years of Shakespeare), which I hated. I was tested on content (what colour was so-and-so's shirt), which I thought was degrading, so I boycotted them. I did take the final exam, though, and so managed to finish the year with a respectable (but still failing) 44 percent.

French


I took 12 years of French and found I was not qualified to work for the government after I graduated because I was still unilingual.

Social Studies:

I enjoyed World Politics and I suppose I learned the basics of political systems. I enjoyed the model stuff that we did:

- model parliament (I managed to win 30 percent of the vote in a school-wide election running as the leader - and only member - of the Fascism Reform Party. This made me leader of the opposition in a minority government. With the socialists I toppled the government and then made their party illegal. Now possessing a clear majority in parliament, I made the socialists illegal as well, thus becoming the only member of parliament. The governor general intervened so I shot him (no I didn't - I had a screaming fit and swore at Mr. Greenfield and his "bloody class" - I was very passionate when I was 16 ))

- model Premiers' conference (naturally, I was the prime minister - I wrote to the government for advice and got pages and pages of policy papers and procedural notes, which I basically committed to memory - something that has served me well in chairing meetings ever since)

- model commonwealth conference, city-wide (subbing for Jane Cooper as the representative from 'England' - and therefore having to crib overnight to prepare), in which I learned that the high-class kids from Ashbury may have been dressed to the nines, but they weren't any smarter than I was

- model revolution - definitely an unsanctioned action, in which the 'Movement for Autocratic Organization' overthrew social sciences - I learned that it's easy to plan a revolution and to write a manifesto, but very, very difficult to actually pull one off - because you have to win the support of the people (aka the students), which is not such a simple matter.

Drama

I tried out and lost a male part to a girl. I learned some of the play. I learned I liked Randy-Lee Gbert (not a typo, it was a very odd last name). Nothing ever came of either thing, though.

Mathematics

Pretty much nothing. By the time I hit high school I was competent with basic mathematics. I could do geometry and measure areas and stuff like that. I zoned out right around the day they decided to teach me quadratic equations (which, somehow, I knew was specialized knowledge).

Science

I'm not sure when I learned the basic laws of motion, friction, force, acceleration, and all that, but I learned them.

In my closed ecosystem project I learned that nature needs sunlight. I also learned that going way way overboard on a project (I kept my ecosystem long after the rest of the class gave up, and then submitted a detailed notebook with graphs of months of measurements, drawings, references, theories, the rest) is sometimes rewarded (with a 20/10 woo hoo!).

Um, what else? I never did make anything explode (not for lack of trying). I learned I didn't want to cut animals open (and hence skipped most of the biology classes). I learned that the experimental method was a fraud (because we were doing 'experiments' but were penalized if we didn't get the 'right' results - which, of course, is a contradiction).

Geography

Oh, I loved geography.

I learned the shapes of every country in the world, their location, their capital cities, their flags, their forms of government, their populations (roughly), their major exports, and some of their history. Mind you, I learned this from reading world almanacs, but I digress...

I learned everything about Ecuador.

I did a major project on the Danube River (so it was a special thrill to finally see it when I went to Vienna, even if it has been rerouted far away from the city center). I wrote to the embassy of every government along the river (I picked the river because it had lots of governments) asking for information. They all responded - the communist countries right away, Austria next, and West Germany dead last and well after the end of the school year.

I did a project about the Northwest Territories and learned all the islands.

I learned urban geography which led to a lifetime habit of creating complex city maps in the margins (and sometimes on whole pages) of my notebooks. I remain to this day an inveterate critic of transit systems, highway intersections, left turn lanes, parkland planning, city profiles, and more. I was seriously tempted in later life by a career in geography (I didn't have the eyes for cartography, sadly, because it really was my first passion - I still love maps of all kinds).

I probably learned more, but that's the main stuff.

History

I learned all the explorers (specifically: the Vikings, Columbus, Cabot, Tasman, Hudson, Magellan, Drake, Frobisher, Mackenzie, Franklin, Thompson, Livingstone, Stanley) and where they went (I made maps, of course).

I also took Ancient History (it was an experimental class with only eight people - I sat next to Janet McGee, the girl who took my role in drama, and she liked me). I learned just enough about the Greeks and the Romans to become fascinated by them (my actual knowledge of the Greeks and Romans is based on much later readings of Herodotus and Gibbon).

Economics

I took grade 11 and grade 12 economics in grade 11, discovered I was very good at it, and lost interest.

Art

I was pretty good at art, but never really received any instruction or technique. So, to this day, my only real artistic ability is to copy. I like taking photographs, though, and with my father set up a black and white photo lab at home.

Industrial Arts

I learned I was very good at drafting. I learned to print (that is, to print properly, with proper form, beautiful writing), which became my 'handwriting' thereafter (including even my signature, which is today (sadly) a bad scrawl - maybe time to fix that). I learned basic drafting techniques, including three-view diagrams and exploded diagrams.

I learned how to weld. I could probably still run a bead - I really liked that. I learned basic carpentry (which was, essentially, how to use power tools without injuring myself).

Phys Ed

I hated phys ed.

I learned I cannot kick a 30 yard field goal, even if my grade depends on it.

I learned I was the 4th fastest kid in school (over distances greater than a mile). I learned that this does not translate into a starting spot on the soccer team. Apparently you need to be able to kick.

I learned the rules of curling and how to curl. As a skip, I am undefeated (1-0, lifetime).

I learned the proper way to hold a golf club. Sadly, this does not translate into 200 yard drives. It does, however, help a lot with putting.

I learned that if you show even the slightest weakness, people will exploit it; that if your clothes are even slightly ripped, people will rip them to shreds; that if you feel pain, people will inflict it.


25 Years Later

I went to my high school reunion. My presence at the high school had been completely obliterated. There was no trace.