Hal Fan Hour

Wednesday, 29 July 2009

No Apology Needed

Posted on 14:56 by Unknown
The big news today (though it is conspicuously completely absent from the pages of the Times & Transcript) is the Saint John Telegraph-Journal's apology to Prime Minister Stephen Harper.

The Telegraph-Journal reported that Harper palmed a communion wafer when he received it at Romeo LeBlanc's funeral, instead of eating it right away, the way he is supposed to. Yesterday they pleaded mea culpa and suspended or fired the publisher and editor.

What really gets me is that every news outlet - every news outlet - is reporting this as a case in which the newspaper was in error and is finally fessing up. I even heard someone from Columbia Journalism Review on CBC this afternoon talking as though it were simply a matter of the newspaper (or its editors, who apparently inserted the fact into the story, made it the lede, and wrote the end-of-the-world headline) being in error.

Look at this fawning coverage on CTV News, for example:

[Embedded video: CTV News coverage.]

The video of the incident, which played quite a bit when it happened three weeks ago, is nowhere to be seen on the television today. There's a good reason for that. The video shows the Prime Minister palming the wafer.

Judge for yourself.

[Embedded video: the Prime Minister receiving the communion wafer.]

With all due respect to the Orwellian inclinations of the Canadian media: you can't yet make me disbelieve the evidence of my own senses. Not even if all of you march in lockstep with a Prime Minister's Office (PMO) whose first instinct about the incident was to lie about it.

I have no doubt mysterious 'copyright claims' will be made and that the video will soon disappear from YouTube - that's the new censorship these days. The indiscretions of the rich and famous are simply 'disappeared' in a wave of copyright claims. So I've saved a copy of the video, probably illegally, so the evidence will remain extant.

I agree completely with Mark Federman: "Given that the Irving family owns lumber, ship building, oil and liquid natural gas refining, TV stations, and newspapers - most of which can be given quite a hard time by the federal government of the day - the fact that this apology and resignation came in response to a clearly embarrassing faux pas by the Prime Minister is perhaps a bigger scandal than the original scandal."

The apparently unrelated stories in the news - the cancelled oil refinery, the sudden need for approval for Irving's headquarters - now take on a new light. Obviously there's a tug of war between the Irving-owned newspaper - which not so long ago fired a cub reporter for criticizing the provincial Liberal government - and the Harper conservatives, one the Prime Minister appears to have won.

Again - use your own judgement - as the Prime Minister turns away, where is the wafer - in his stomach? Or in his pocket?

[Embedded image: the Prime Minister turning away after receiving the wafer.]

(For the record - I don't care whether or not he ate the wafer - even if it was a protocol breach, it was very minor and completely forgivable - what I do care about is, first of all, the immediate reaction of his office, which was to deny the story, and now today, the apparent pressure exerted on the media to collectively deny the evidence of our senses.)

Your Pension Awaits...

Posted on 14:32 by Unknown
Sure, saving for your pension is important. Or, it would be, if you could be sure of having that money when the time came to retire. I never believed it. I used to complain about pension plans - especially mandated pension plans - because, I would say, "they'll just steal it from me."

People said I was being paranoid. Others would just laugh at me, as though it were a joke.

I had an RRSP - it was one of those mandatory locked-in RRSPs, that I couldn't cash. So I had it in mutual funds. The bank waited until the market crashed, then (without giving me any option) converted it to cash. That was the TD Bank. My total savings from all my work prior to 2001 now sit at $10,000 in cash. That I still can't touch.

Of course, I, at least, have something. Even those who have managed to avoid the shysters and the banks and the crooks are still in trouble. Consider CanWest employees. "It’s not clear whether that’s because of the market and/or low interest rates or because Canwest hasn’t been meeting its pension payments. No matter the cause, the retirees have been told that Canwest has no plans to fund the deficiency."

The short version: bye bye pension.

There's going to be a lot more of this, because what has happened, by and large, is that companies collected this pension money, and then spent it. Oh, they didn't say they spent it. It doesn't show up on the books as having been spent. But when it comes time to actually pay it, mysteriously, it won't be there. The company will reorganize, or be sold, or go bankrupt, and it will disappear.

People currently saving for their pensions, or in other registered investment savings plans, such as pre-paid tuition, or other education savings plans, or life and other forms of insurance, and all the rest, will likewise find that their money has disappeared. It's going to get worse before it gets better. And it will get very bad once the government has stopped bailing out its friends, and declares that there's no money available for income support, for health care, for education, and the rest. After promising that we could invest our way into income security, they will throw us to the wolves.

Don't plan on retiring, even if it is only a few years away. Take these last few years you have of something like secure employment and develop some marketable skills. Learn programming. Learn carpentry. Auto repair. Something.

Tuesday, 28 July 2009

Correction

Posted on 07:55 by Unknown
Read this:

Correction: July 22, 2009
An appraisal on Saturday about Walter Cronkite’s career included a number of errors. In some copies, it misstated the date that the Rev. Dr. Martin Luther King Jr. was killed and referred incorrectly to Mr. Cronkite’s coverage of D-Day. Dr. King was killed on April 4, 1968, not April 30. Mr. Cronkite covered the D-Day landing from a warplane; he did not storm the beaches. In addition, Neil Armstrong set foot on the moon on July 20, 1969, not July 26. “The CBS Evening News” overtook “The Huntley-Brinkley Report” on NBC in the ratings during the 1967-68 television season, not after Chet Huntley retired in 1970. A communications satellite used to relay correspondents’ reports from around the world was Telstar, not Telestar. Howard K. Smith was not one of the CBS correspondents Mr. Cronkite would turn to for reports from the field after he became anchor of “The CBS Evening News” in 1962; he left CBS before Mr. Cronkite was the anchor. Because of an editing error, the appraisal also misstated the name of the news agency for which Mr. Cronkite was Moscow bureau chief after World War II. At that time it was United Press, not United Press International.


So where is this from? Wikipedia? Some guy's blog? Bad student essays?

No: The New York Times

Can we please stop talking about the 'authority' and 'reliability' of traditional media - the editing process, the review process, etc.?



Saturday, 18 July 2009

The DNC Kindle Plan

Posted on 10:27 by Unknown
Not that it needs to be said, but...

Responding to Democratic Group’s Proposal: Give Each Student a Kindle

The idea is a bad idea, not because paper texts are less expensive or any great shakes – they're not – but because the Kindle is bad, overpriced, and inefficient technology.

Providing students with netbooks (or having them buy their own, for those of you who think any government expense is communism) will provide free access to the world’s literature without Kindle’s proprietary technology, invasive content management, and high costs.

When textbooks – especially at higher education levels – can cost $100, the savings of a $250 netbook become apparent – but only if you're not paying $99 for the electronic version of the textbook. Electronic media works only if costs for digital materials are substantially less than paper materials.
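To put rough numbers on it (mine, for illustration): a student who needs five $100 textbooks pays $500 for paper. A $250 netbook plus open access texts costs half that. But a $250 netbook plus five $99 locked-down e-texts costs $745 – more than the paper it was meant to replace. The savings only appear if digital costs are substantially lower.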

And they can be. Indeed, the cost for most digital materials is tending toward zero. Only when a distributor can lock you into a proprietary platform does the cost remain high. Open access materials – everything from Project Gutenberg to Wikipedia to Media Awareness Network – will deliver the savings Kindle cannot.


Friday, 17 July 2009

IMS Curriculum Standards Workshop

Posted on 10:37 by Unknown
More blog summaries from the IMS-2009 meetings in Montreal.


Overview of Common Cartridge
Kevin Riley

(same talk as the previous two days)


Achievement Standards Network

Diny Golder (presenting)
Stuart Sutton

http://www.achievementstandards.org

An achievement standard "is an assertion by someone in authority..."

The domains we want to cross are geographic domains, grade-level domains, and work domains. So when we use the term it could be curriculum standards, content standards, etc. ASN is a data model for these assertions.

Specifically, when we created the ASN, it came out of a lot of research dealing with the domain of cyber-learning, with the ways one learns in a digital world. The goals look forward, not just to the existing needs of the standards bodies, but also to enable a global distribution model, so anyone can play in this field. This is very similar to the way we live in the world of paper libraries, and what a catalogue record is about. ASN standard representations are licensed under Creative Commons.

Global interoperability: standards data can be referenced with URIs.

Australia: resource integration through achievement standards. Being used in Australia and also in countries they work with. In Australia, they are developing a curriculum, but also a national standards system. Parts include personal learning, reporting, student portfolio, etc.

The idea of sharing and collecting resources to teach is not a new idea, but it's fairly new in the environment of traditional textbook publishers. Example: Thinkfinity, from National Geographic, Smithsonian, etc. They have correlated resources using ASN, and Thinkfinity pulls them together. Similarly, TeachEngineering correlated resources to standards for all 50 states. Also, WGBH does education programming for PBS, and these are correlated to standards for the 50 states.

The Michigan eLibrary page is an example of different views of the standards; this (demo) is their cataloguing tool. They get resources from hundreds of sources, and they do not include correlation natively, but a cataloguer correlates it. The cataloguer correlates to a level. The user can browse via the standards and choose a subject, and they have an indicator that a resource has been correlated to that 'statement' at that level (a 'statement' is a competency assertion, generally found within a state standard).

(Yvonne from D2L presenting)

At D2L we have integrated this into the system. Basically we have taken the ASN competencies list, converted it, and integrated it into the tool. Hence, we have learning activities tied to the curriculum standards. So teachers and designers can tie their materials to competencies in the ASN.

In our learning repository tool, we embed the taxonomy information into the metadata. The publishing of the material will allow the material to be aligned to the standards. You can browse through the repository by competency and grab content specifically correlated to that learning objective. Or if you look at content, the system will return any competency associated with that content.

The ePortfolios tool - it's really important to be able to look at the different levels of competence and how students will achieve those. You can take those competences and publish them to the ePortfolio. You can then look at that and see which of the competencies you've accomplished. And maybe add more during the course of the year.

Competencies can be added to eportfolio presentations, tagged, added to collections, shared, reflected upon, etc. (demo - competency reflection).

(Diny presenting)

Click & Learn - another example. They feel one of the things they offer to subscribers is an easy way to browse the collection by the standards. They have web services that go out and pull the data live from our databases. They are indifferent as to whether the definition of the standard has changed. None of this is stored locally. (Standards change only in one way: they get new ones. Old data never goes away. Resources remain related to statements.)

(Stuart presenting)

Looking at ISO-MLR compared to ASN. We use the Dublin Core abstract model, which is well-defined. We use globally unique identifiers that are dereferenceable over the web - URIs. Every node in the standards document has its own URI. When it's resolved, what is returned is the taxon path all the way up to the standards document.

When we talk of standards, what we normally think of is text blocks. But behind that, every statement has a whole set of metadata that systems find very useful. That schema is Dublin Core, and it is extensible (we found Australia needed to do this). So there is the Australian application profile of the ASN, the US application profile, and we expect many more.

The ASN model basically starts with an achievement document representation. We do not duplicate the standard, we create a representation of the standard. So we create an achievement document representation that in RDF consists of a set of nodes and arcs. There are two primary entities, the standards document itself, and the nodes, which are individual assertions (statements) and arcs that define relations across the nodes (nothing says it has to be hierarchies) - this allows new relations to be established between the nodes as they emerge in practice.

Text needs to be there to satisfy humans, but it is inherently ambiguous. We always identify text with URIs, because even if text strings match, they may be in different hierarchies and have different meanings, so they have to have their own URIs. Behind the text is a rich set of descriptive metadata, of which the text is one piece. All the rest of these properties (about 57 elements) apply to it. E.g. education level (in Bloom's taxonomy), jurisdiction (where it's from), etc.
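To make that concrete, here is a rough sketch (mine, not ASN's) of the kind of record that might sit behind a single statement - the field names are loose paraphrases of the elements mentioned above, not the actual schema:

<?php
// Illustrative only: one ASN statement with some of its descriptive
// metadata as a PHP associative array. URI and values are invented.
$statement = [
    'uri'            => 'http://purl.org/ASN/resources/S1000000', // hypothetical URI
    'text'           => 'Describe the water cycle.',              // the human-readable block
    'educationLevel' => '4',                                      // grade level
    'jurisdiction'   => 'Ohio',                                   // where it's from
    'language'       => 'en',
    'derivedFrom'    => null,  // set for third-party derived statements
];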

(Diny presenting)

This metadata is extremely useful for a publisher who is creating a correlating point for a resource. These elements, when the standards statement has rich metadata, will help you make those correlation points to those resources that have rich metadata. If it doesn't fit this sort of correlation, you can exclude it from consideration. But if you have, say, a history, and you are on a timeline, you can use the spatial and temporal aspects to apply it to the timeline.

(Stuart presenting)

The metadata is created by the people who are creating the descriptions of the standards. It could be us, or it could be the standard authors - the Australians are doing it themselves.

(Comment: if you have tools that depend on there being alignments, they have to be there - you can't depend on them and then have them not filled out)

Reply: we follow the general DC principle of 'optional'. It's up to application profiles to determine within the domain. For example, the Australians depend on what is required or not.

(Diny)

The lists have come out of a lot of organizations coming out with 'national exemplary statements'.

(Stuart)

When a taxon path is returned, all the metadata comes with it, not just the text.

3rd Party Derived Statements

Our goal in the last five years has been to support as many environments as we can. The 3rd party derived statements tend to arise where the statement in the standard is not granular enough, and the publisher needs it to be more granular; such 3rd parties can create derived statements and lock them into the ASN model. That doesn't mean that's the wisest thing to do, just that it can be done. Just consider this a framing of the issue - that we can support third party assertions.

For example, here you see a very simple taxon path (image). It's a simple hierarchy, each level is an entity, and behind each entity is a metadata description. This is from Ohio, it's fairly shallow, only three levels deep, and if you get down to this node (bottom) you will see that there are some 60 competencies here, and you may want to test to only part of it. You may say, 'my testing corresponds to this subset' and this subset is a derived statement.

Derived statements are 3rd party statements that *refine* the original statement. They restrict it in some way. As long as it's a legitimate refinement, the datamodel will handle it. So the derived statement will reference back to the canonical node it was derived from.

Example: derived statement with a URI pointing to its own domain, say, test.com. When you hit one of these, you have these options (sketched in code after this list):
- you can discard it ("I don't speak test.com" - not recommended)
- you can use the correlation, if it comes from a trusted source
- you can generalize the correlation (to "dumb it down"), which is the same as discarding the correlation, but applying it higher up the taxon path (this is what you would do if you don't trust the source)
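A minimal sketch of that decision logic, assuming a hypothetical 'derivedFrom' property and a caller-supplied lookup function (neither taken from the actual ASN vocabulary):

<?php
// Hypothetical sketch: handling a third-party derived statement URI.
function resolveCorrelation(string $uri, array $trustedDomains, callable $fetchStatement): ?string {
    $domain = parse_url($uri, PHP_URL_HOST);

    // Use the correlation as-is if it comes from a trusted source.
    if (in_array($domain, $trustedDomains, true)) {
        return $uri;
    }

    // Generalize: walk up to the canonical statement the derived
    // statement refines, and apply the correlation there instead.
    $statement = $fetchStatement($uri);       // dereference the URI
    if ($statement !== null && isset($statement['derivedFrom'])) {
        return $statement['derivedFrom'];     // canonical parent node
    }

    // Otherwise discard ("I don't speak test.com") - not recommended.
    return null;
}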

The point is that the model supports 3rd party assertions, and they can be locked into the canonical structure, and they have meaning and context. What you do with them, though, is completely up to you.

These are useful when you have a statement that is so complicated no resource in the world could do all these things, so you infer from it ten different derived statements.

(Diny)

We have distributed a couple of research papers, one of which addresses the question of 'strength of fit', e.g., "the statement is broader than the resource", "the resource is broader than the statement", so we're exploring this.

(Stuart)

So you can have a 'strength of fit' threshold.

(Comment)

If we were to standardize, from a CC perspective, to what level would we use the taxonomy, the canonical, or to the derived?

(Stuart)

That's the question before us, probably the major question, about what you do in Common Cartridge.

Here (slide) is a set of derived statements from Pearson. They result from dropping 'parentheticals' (actually, limiting clauses of the statement) or splitting lists.

Tools and Services

Current tools and services that we (ASN) offer:
- batch downloads of standards, which are freely available
- mechanisms to dereference an ASN URI (to get the taxon path, no logins, nothing - see the sketch below)
- web services (APIs) that interact with metadata generation tools (no API key required)
- searching and browsing interface within ASN

(series of slides showing the services)
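As a rough illustration of what "no logins, nothing" means in practice, a sketch of dereferencing a (hypothetical) statement URI over plain HTTP - the real service defines the actual URIs and response format:

<?php
// A minimal sketch of dereferencing an ASN URI over the web.
$uri = 'http://purl.org/ASN/resources/S1000000';  // hypothetical statement URI

// Ask for a machine-readable representation; per the talk, what comes
// back is the taxon path all the way up to the standards document.
$context = stream_context_create([
    'http' => ['header' => "Accept: application/rdf+xml\r\n"],
]);
$taxonPath = file_get_contents($uri, false, $context);

if ($taxonPath !== false) {
    echo $taxonPath;  // RDF for the node plus its ancestors
}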

(Diny)

We encourage other parties to create rich services around it; we are a research organization and will not be developing those services.

(Stuart)

Within the U.S., NSF funds the gathering of all current and historical standards (761 of them), which have been decomposed (atomized) into "assertions" (RDF triples).

- break -

Standards Meta-Tagging Within the K12 Common Cartridge - Issues and Options

Mark Doherty (Academic Benchmarks) presenting


The goal here is to look at the issues and options involved in standards and meta-tagging (for our clients and us at Academic Benchmarks).

Academic Benchmarks provides the numbering system for the K-12 standards metadata for the content providers. Headquartered in Cincinnati. Founded in late 2003 and was focused on B2B provision. In summer of 2009 launched an http://academicbenchmarks.org site to extend outreach to support teachers, educators and researchers, and published the entire document collection. Also some reports and surveys. The .org is free for all.

In the database there are 1.7 million records that reflect state, local or international standards. These are updated constantly (like painting the Golden Gate). The AB GUID network has 175 clients in the numbering system. Also new clients from the open curriculum initiatives: Curriculum Pathways, Curriki, etc.

We started off with some tenets that drive us forward. We have a common and complementary challenge with IMS. There is a weakness in the K-12 education market that is stopping innovation and costing money. That weakness is a lack of a common method to communicate content and a growing set of metadata. There is a lack of clarity on the roles each group can play at various levels for ultimate success. There needs to be an element of practicality, of flexibility, of pragmatism. That is the approach we have taken to the marketplace.

Formula: technical model + business model + adoption = successful model. I think each of us in this room operates their business on this formula.

Technical Model

The challenge here is to serve districts with different tools, products and systems, each with a different technical approach and varying functional components. There is a need for flexibility. The AB response is the AB number, the AB GUID (the number associated with the academic standard), which is delivered to customers. That provides uniqueness for the standard. The GUID is the absolute center of what we do. We don't concern ourselves with the format - that's just the delivery mechanism - the number is key. The format may be AB XML, SIF XML, CSV, XLS, or some custom format. Common Cartridge K-12 is a pending format.

(someone else (Kelly?) presenting, very fuzzy, can't be understood)

The context is more around the challenge that we see the marketplace has, and the practical part is that it has to address some sort of mechanism that does exist now.

Technical Practicality

Not all K-12 districts or states will use a Harcourt (or McGraw Hill) platform for the whole time. They will be swapped in, swapped out, and multiple providers will be integrated at various points in this. That is one of the driving points of Common Cartridge. So we are driving at the idea that there needs to be a unique numbering system that all providers can use, so all can share the same metadata, without any loss of integrity.

We actually have response systems based on the platform (...?) They shop for that based on the content provider (... ?) (these sentences are literally gibberish, sorry)

Business Model - Operational Practicality

We all understand the difficulty in this. Whatever the authority is for the standard, they are the creator of the standard, but the issue we see is they are not using their authority to solve a critical piece is diminishing here, and that authority has the opportunity to make great efficiency, if the standard is actually implemented (this is direct and literal, his sentences are this disjointed).

Our response here is that we collect the standards as published by the state, and add value to them by converting to an actionable state, add the number (the GUID) to them, and distribute them to our clients. What we have seen as an example of this is, we're doing this because the states won't. (sorry, that was a literal sentence, if nonsense). We're doing this, in effect, for the states, on behalf of the states.

Funding Mechanisms

In IMS, the folks in the room, are members, and in the same context, are clients, and the government or the branch or whatnot the market in every brach there is a question of how it is going to be funded. (gak)

How do we sustain? The market demands constant delivery of value. There's a free market solution in place at the numbering level. We are incented to innovate because we have substantially improved that offering. We are just one element of the metadata movement, and we need to be able to fit into whatever container our clients want us to fit into.

Every discussion comes down to, what is the financial model, how will this activity be funded? We've seen entities rise and fall in the past because of the untenability of the business model. With 1.7 million records, we are tenable. When we see a standard, we say, to us it's a standard that needs to be supported.

Benefits of Uniqueness

Examples of identification systems: zip code, bar code, ISBN.

Adoption

We have content-neutral platforms (e.g. Blackboard), the content providers (e.g. Discovery Education), and the hybrid providers (e.g. Curriki, BrainPop). There is a network effect to the AB GUID. The number is a GUID. It's a long complex string. It is really dry. That is the number by which these systems communicate. The lines represent real relationships in the market now. Imagine how this network effect can grow even larger.

We see tangible benefits of the AB GUID network: uniformity (accepted communication system for K12 standards), cost savings (monitoring and digital deployment of standards), revenue opportunities (efficient delivery of products) and partnerships enabled (common link and technical model). IMS members, also AB clients, have already adopted a small piece of the overall solution, the AB GUID.

It's a proven technical model with a sustainable business model, and people actually use it.

(Question: payment)

Someone has to pay for a GUID. No pay, no number.

(Question: is there a computer interface so you can download all the numbers from the .org site?)

There is a search for the numbers. We are open to different methods, but at this moment the .org is intended to be an inventory rather than anything that is really downloadable.

---

Discussion

(Kevin Riley)

One thing that came up in the break was the dilemma between the interest of the publishers in using the extended version and the interest of the platforms in using the canonical version.

One reason the publishers are so interested in building the extended version of the standards is for remediation, so you can break down the standards into subsets that can be very finely tuned.

It's an interesting scenario, and I readily accept the point, but there's a huge gap. If you're talking about using the standard for that purpose, what you're talking about is some kind of sequencing mechanism, and to do that, you need a common model, or an algorithm, by which you're going to do the remediation.

You're actually talking about having a common sequencing mechanism for that remediation.

(comment)

There are other approaches which might be a small step toward that. There's no reason you couldn't have a black-box algorithm that reacts to that. You could still have that. The system could also have its own proprietary sequence and search algorithm. In the (CC) architecture shown, that's somewhat enabled. So there's a chance for those advanced learning models (but not in the assessment, and not in the curriculum).

(Kevin Riley)

SCORM tried to hardwire sequencing in SCOs, which I don't think was a success. There was Learning Design, but it wasn't really adopted by providers. There's a movement to have a simpler LD that can be adopted in CC. The other option we've got is to use the LTI - that is something that is capable of remediation, but doesn't need to be imposed on the LMS.

(comment)

With sequencing you're painting yourself into a corner. Sequencing works when you've crafted everything together. But when you're looking at larger bodies, it's about scope and sequences, precursors, etc. It would be nice to know the order, but if they teach out of order, they know about it, but aren't prevented from doing this. You can express these things in a useful way, but the real problem is, it is focused on the LMS, which prevents you from doing anything nuanced within the common cartridge.

(Kevin Riley)

In the current cartridge, there is an implied sequencing. There is an attempt to enhance that with the lesson plan. Ultimately it would be nice to have a machine-readable version of the lesson plan where you can describe alternative navigations through the learning material and the instructor can choose from those. Then you have the further alternative of offloading remediation to an external application via the LTI (the attraction of that is you don't need to invent any algorithms within the LMS).

There was a call to revisit Simple Sequencing. We're encouraging LD to support this approach.

(comment)

When you look at it from a larger scale, it's scope and sequence, not just sequence. We have hierarchy issues that are interesting. But we have other scope and sequencing issues that are interesting. We want to ship not just the book, but also the lesson plans. The hope is that these could be adapted into the assessment system. I want the table of contents encoded, and related to state standards.

(Kevin Riley)

We have curriculum models that stipulate what needs to be accomplished by the students. But the job of the college is how to assemble the material to do this. They frequently break the curriculum into modules and present them in very different ways from the way they're presented in the curriculum model.

(comment)

There may be more than one opinion about how to order this at the macro model. They can all provide their own.

(Kevin Riley)

One way of looking at it: an effort to provide different views of the material in the package. In the Cartridge there is no sequencing. But in the organizations it can be sequenced. But it's independent of the curriculum.

Mechanism

In the resource metadata, we have the curriculum standard metadata, which states:
- the originating authority
- the region to which it's being deployed
- the list from that model that's directly applicable to that resource - we use the URL whereby the platform can dereference the information (see the sketch below)
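For illustration, the three pieces might be gathered like this - the field names are invented here, since the Common Cartridge schema defines the real elements:

<?php
// A loose sketch of the curriculum standards metadata listed above,
// attached to one resource. Names and values are placeholders.
$curriculumMetadata = [
    'authority' => 'Ohio Department of Education',  // originating authority
    'region'    => 'US-OH',                         // where it's deployed
    'standards' => [                                // dereferenceable URLs
        'http://purl.org/ASN/resources/S1000000',   // hypothetical
        'http://purl.org/ASN/resources/S1000001',
    ],
];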

Question about how to resolve curriculum references, eg. AB Numbers

Comment: the current state curriculum is loaded into the LMS.

(More discussion on mechanisms - I suggested that it didn't make sense to load the curriculum information into the cartridge, but rather to refer to an external service that maps them.)

Comment: let's get some prototypes out, and then we can decide on what's really critical.

My comment: you cannot build a requirement that money be paid into the specification (e.g., you cannot require in the specification that they have to pay AB GUID money in order to map to a curriculum), because it must be possible to create and distribute cartridges without cost to the provider, so they can be distributed for free.

Comment: we will just have to support multiple providers of these standards. Eg. to deliver content into the UK. ASN has no incentive to go in there.

Thursday, 16 July 2009

Developer workshop - CCv1.1 and basic LTI v1.0

Posted on 12:13 by Unknown
Summary of the IMS developers' workshop.

Developer workshop - CCv1.1 and basic LTI v1.0

Kevin Riley

Common Cartridge solves a problem that really should have been solved years ago. The real purpose of Common Cartridge is to crack the goal of routinely exchanging content across systems. We took the stance of working with industry - in particular the LMSs and the publishers - to determine which features customers actually wanted installed. The other thing was that it was important that Common Cartridge be easy to implement, not just for platform vendors, but also easy to import content.

The educational paradigm that CC supports is essentially self-directed learning with the assumption that there is an instructor in the loop. We assume they have access to a platform where they can work at their own speed, and a peer group with whom they are working. Finally, we based it on existing standards that have already been widely adopted and were in themselves very stable.

CC version 1.0 included cartridge metadata - the description of the cartridge itself - and resource metadata - metadata associated with a resource in the package. We basically provided a means by which we could identify who the resource should be visible to, instructor or students. The default was that it would be visible to all. We had intended to use the 'invisible' element from Content Packaging, but this was already in use in different ways by the community, so we couldn't guarantee the results. So we used the resource metadata itself.

There is a whole range of content that can be placed in a cartridge: html, web links, media files, application files, etc. We also wanted to be able to include assessments in a cartridge. We went through an exercise with the publishers, asking them to review what question types they actually used. It became clear there were six basic types: multiple choice (single or multiple response), true/false, essay, fill-in-the-blank, and pattern match. Also, we introduced a discussion forum, and created a schema to introduce the discussion forum and integrate a group of users. And finally, content authorization to protect content - not DRM, but just a way for publishers to check that the licensing is being respected.

In CCv1.1 some changes were introduced. In roles metadata, we created the role of 'mentor' for parent. There is also use of a third role in other areas - the mentor could be an employer, for example. In higher ed, the mentor role might not be used - in any case where the mentor does not exist, content available for the mentor will be made available to the instructor.

Another area related to content in different languages: not only presenting in different languages, but also, the content itself might be in different languages, as in a language-teaching resource. You define a prime language and secondary languages to be referenced. Also, there was a request to include instructions for users to complete assessments. There is a rubric from QTI we use. Additionally, there is a rubric for the inclusion of curriculum standards (workshop on this tomorrow) across different regions of the world and different authorities. Also, there is the inclusion of the 'lesson plan' in the cartridge.

And finally, most importantly, there is the integration of LTI, which is the subject of today's seminars.

The actual spec defines unambiguous rules for building cartridges, describes additional constraints through the use of schemas, and offers two levels of compliance testing: for cartridges and for platforms. Common Cartridge does not define the runtime - there is an implied runtime, but we have stepped away from explicitly defining the internal operations of the platform. This allows vendors to more easily adapt to the spec.

The LTI incorporates the possibility of linking to services that may be run at the publisher's site. We can define this as just another resource in the cartridge. The same protocol can also be used for accessing eBooks and other kinds of services.

Core Cartridge metadata is defined in Dublin Core and mapped to IEEE LOM. Metadata resides in the imsmanifest file. It is not order-sensitive. Also, any requirement for a specific player by cartridge content must be declared in the Cartridge metadata.

At the level of resource metadata: roles metadata is associated with the resource and restricts who the resource is accessible to. Curriculum metadata identifies learning objectives addressed by a resource (samples of roles and curriculum metadata shown).

Conceptually, there are four kinds of content in a CC: "learner experience data", which is your traditional lesson content; "supplemental resources", which is additional material (including extra questions) an instructor can access during delivery; "operational data" explaining how the content behaves in the platform at runtime; and "descriptive metadata" which is the cartridge metadata describing the whole thing.

Compared to content packaging (CP) version 1.2 (CPv1.2) we decided to omit multiple organizations of content. We also removed the CP cartridge version characteristic object, so content package tools couldn't open cartridges - they must be opened by something that is cartridge-aware. Also deleted were sub-manifests, as well as inter-package links (ie., xpointer). Packages really need to be stand-alone in their own right.

We added things as well. For example, the root folder for shared content. Associated content in a learning application object directory. There is a schema for authorization data.

- diagram - links between resources and cartridges, allowed and disallowed

Assessments represent instances of QTI. They can embed any of the question types supported by the CCv1.0 profile of QTI. An assessment can contain a number of attributes, including number of attempts, time limits, and whether late submission is allowed. Cartridges offering feedback must support Yes/No and Distractor. The spec also, as mentioned, allows for a qticomment element (from the QTIv1.2 rubric) to allow for instructions on how to complete the assessment. (demo - qticomment element)

Question banks can be included, as an instance of the QTI objectbank, but only one can be included. You can embed any of the supported question types. While questions are used in assessments, they cannot be referenced by other resources in the cartridge.

Finally, the Learning Tools Interoperability (LTI) specification allows tools to be launched and data to be returned to the LMS that launched the cartridge. (demo: basic LTIv1.0 description) It is a new resource type, an LTI call, and within that we can hold a description of how to access that service. In addition, eBooks can also be integrated in the same manner. So we can provide a reference to an eBook directly from the LMS, such that the reference points directly to the place in the eBook (section and page) relevant to the placement in the course. The publisher can therefore put a set of tokens into the cartridge that gives the correct reference into the eBook (this could be opaque, but at least one publisher - Pearson - will make some references available as parsable strings). eBooks will be offered for this as a hosted service. And some publishers are building eBooks into applications, as resources accessible from the application.

There are two forms of authorization: one on the import of a cartridge (is this a valid site, does it have a license), and one on use, which is anonymous, and is authorization by PIN number. It is not foolproof; people can bypass it, but if they do, it is clearly done - it is perfectly visible that they intended to do it. Authorization is via an authorization server - for the import, you go to a single source (the assumption is that the cartridge is covered by a single license).

The Common Cartridge Alliance is dedicated to achieving adoption. We have the specification, and will soon have version 1.1, as well as a tool, SchemaProf, for profiling IMS (and other) specifications. There is a profile registry, so people can submit profiles and share them. We actually have cases where people independent of us have used SchemaProf. There is a compliance program to help people adapt to the spec (the end-user community is tired of hearing excuses about why they can't import content from different people). There is a collaboration program and a CC forum (the Common Cartridge Alliance website is http://www.imsglobal.org/cc/alliance.html but you must be a paid subscriber to access the forum).

There is a compliance program and a mark you can use for recognized compliant resources (you have to be a paid-up member to comply). There is a directory of resources: http://www.imsglobal.org/productdirectory/directory.cfm Also, JISC has created a 'transcoder' that creates common cartridges from other types of resources. http://www.jisc.ac.uk/whatwedo/projects/transcoder.aspx


IMS Workshop Notes - Chuck Severance

Homework (to do while you're ignoring the lecture):

1. Join the developers network (free): http://tinyurl.com/imsdn-forum
We talk about code & move code around, but not about the spec itself. It requires a free IMS community account.

2. Make sure you can edit text documents without messing them up. That is, no MS-Word or anything else that imposes formatting. Use BBEdit, TextMate, TextWrangler, etc.

3. Decide whether you're going to use Java or PHP (or both). Unless you are an experienced servlet developer in Java, use PHP.

4. Install PHP - use XAMPP

5. Unzip the PHP from the DVD into the directory and test your install. From
http://code.google.com/p/ims-dev/

Get the latest handout: http://www.dr-chuck.com/ims



Learning Tools Interoperability

We have three major types of LTI:

- Simple LTI - not a formal spec - May 2008
- Basic LTI - launch plus outcomes, part of IMS Common Cartridge 1.1
- Full LTI - end of 2009

Basic LTI is a profile of LTI 2.0 - the focus is on launch and integration. As we saw yesterday, the user experience is to click on the link and see the tool. The LTI tool is a proxy tool inside the LMS (so the link is internal to the LMS) which posts a form with learner information and a secret and then forwards it to the vendor site, which verifies the secret and sends a cookie with a session and a redirect, which opens the service.

It's very simple to write for; it's simply REST-based. You basically get four kinds of information sent on every request:
- the LTI version number
- resource link ID - the LMS's representation of the resource
- user ID - an opaque string, from the LMS, with no identifying information
- roles - we just take the roles from LIS (Learner Information) - assume that tools key off the 'instructor' string; otherwise you're a non-instructor
- a bunch of data pieces from LIS (we used underscores because many systems don't like dashes): lis_person_name_full, lis_person_contact_email_primary, lis_person_sourced_id, context_id, context_title, tool_consumer_instance_guid (this is the instance of the LMS, your LMS's domain name), tool_consumer_instance_description (this is the description of your LMS).

Some of these data elements are optional. For example, an LMS might decide not to send the person's name. The whole LTI is designed to be 'sandboxable'. The absolute minimum set is LTI version, resource_link_id, and (maybe) context ID.
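Putting the list above together, a minimal launch payload might look like this - parameter names follow the list, but the values are invented:

<?php
// The minimum launch data described above, plus a couple of the
// optional LIS fields, as the POST parameters a consumer might send.
$launch = [
    'lti_version'          => 'LTI-1p0',
    'resource_link_id'     => '120988f929-274612',  // the LMS's id for this placement
    'context_id'           => '456434513',          // (maybe) the course
    // optional - an LMS may choose not to send these:
    'user_id'              => '292832126',          // opaque, no identifying info
    'roles'                => 'Instructor',         // from LIS; tools key off this string
    'lis_person_name_full' => 'Jane Q. Public',
];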

Basic LTI in Common Cartridge: the basic idea is that, in the Cartridge Use case, the publisher wants to point to something:
- so you need the URL
- then there are custom parameters (not to be messed with by the LMS) - the LMS parameters are under 'extensions'. They can embed context information, version info, etc., whatever they want. The idea is, whatever they put in here, they will get back when the cartridge loads (if they need to ensure it's the same, they can encode it).
- the extensions are generated by the LMS and are namespaced by the LMS. For example, Sakai might send the frame-height, or some other LMS-specific information.
- vendor - some human-readable info about the vendor.

The basic LTI security model is based on OAuth. See http://www.oauth.net It signs messages using time-stamp, nonce. We use trust between pairs of servers. Maybe one day there will be three-legged OAuth in order to support identity servers or third-parties (like Twitter). We're not going to try to communicate to you through some kind of third party identity who the user is.

The tool decides the level of security. It may require all the security values, or none of them. So, for example, it could check time skew (how old the request is) and nonces (there is literally no way to alter them, so there's no man-in-the-middle problem). See http://www.intertwingly.net/blog/1585.html. OAuth depends on a reasonably long secret - a 4-character PIN isn't very secret. You want them long, ugly and nasty. Right now, they are hand-delivered ("passed out-of-band"). There are two levels of secrets: the site-wide password, and the resource-level password: an individual license or one-course license. Note: you don't have to have either or both.
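A sketch of those provider-side freshness checks - the in-memory nonce store here is a stand-in; a real tool would persist nonces:

<?php
// Reject stale timestamps and replayed nonces before trusting a launch.
$MAX_SKEW   = 300;  // seconds of allowed clock skew
$seenNonces = [];   // nonce => timestamp (use a database in practice)

function checkFreshness(array $post, array &$seenNonces, int $maxSkew): bool {
    $ts    = (int) ($post['oauth_timestamp'] ?? 0);
    $nonce = $post['oauth_nonce'] ?? '';

    if (abs(time() - $ts) > $maxSkew) {
        return false;             // request too old (or clock too far off)
    }
    if (isset($seenNonces[$nonce])) {
        return false;             // replay: nonce already used
    }
    $seenNonces[$nonce] = $ts;    // remember it
    return true;
}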

The LMS admin calls the tool provider, the provider generates a secret, and keys it in over the phone. Inside the LMS there will be some list of providers and passwords (with an editing screen to change them). The resource-level secret is assigned by URL, resource key, and secret. The LMS secret implies that the user_id and course_id values are the same from launch to launch. But with resource-level secrets, values such as user_id and course_id in launches should be modelled as linked to the resource or key.

(Description of OAuth - *way* too fast to follow if you didn't already know the spec - I'll cover this in a later post).

OAuth implementation patterns: we create an array or property bag of parameters and pass it to OAuth to sign. Then we send the signed data. The tool receives the post, calls OAuth to pull the data from the request, and asks OAuth to validate it. OAuth calls your 'store' to look up keys, etc. The pattern is, you put all this into a form, and then you press the submit button (you have to deal with the case where Javascript is turned off, so you have to include the submit button in the basic configuration, or a bit of clever Javascript that will emulate a button-push). You hand the form to OAuth, which processes it, and if it is approved, you do stuff with it. If it is rejected, you return to the launch_presentation_return_url.
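Here is a condensed sketch of that pattern without the library: build the property bag, sign it with OAuth HMAC-SHA1, and render the auto-submitting form. This is my own illustration, not the BasicLTIUtil / blti util code itself:

<?php
// Sign a launch parameter bag with OAuth 1.0 HMAC-SHA1.
function signLaunch(array $params, string $url, string $key, string $secret): array {
    $params += [
        'oauth_consumer_key'     => $key,
        'oauth_nonce'            => bin2hex(random_bytes(16)),
        'oauth_timestamp'        => (string) time(),
        'oauth_signature_method' => 'HMAC-SHA1',
        'oauth_version'          => '1.0',
    ];
    ksort($params);  // OAuth requires parameters sorted by name

    $pairs = [];
    foreach ($params as $k => $v) {
        $pairs[] = rawurlencode($k) . '=' . rawurlencode($v);
    }
    $base = 'POST&' . rawurlencode($url) . '&' . rawurlencode(implode('&', $pairs));

    // No token secret in Basic LTI, so the signing key ends with a bare '&'.
    $params['oauth_signature'] = base64_encode(
        hash_hmac('sha1', $base, rawurlencode($secret) . '&', true)
    );
    return $params;
}

// Render the signed values as hidden fields; the visible submit button
// covers the no-Javascript case mentioned above.
function launchForm(array $signed, string $url): string {
    $html = "<form method='post' action='" . htmlspecialchars($url) . "'>\n";
    foreach ($signed as $k => $v) {
        $html .= "  <input type='hidden' name='" . htmlspecialchars($k) .
                 "' value='" . htmlspecialchars($v) . "'>\n";
    }
    return $html . "  <input type='submit' value='Launch'>\n</form>\n";
}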

Basic LTI depends on IMS Tools Interoperability for support of outcomes. It's a very simple message signature. If my LMS supports outcomes, I include a field that is defined to support outcomes. All I send is the outcome and the tool launch ID. The security is separately established between tool and LMS. The tool vendor has to receive out-of-band authorization to set data (send outcomes) back to the LMS. Very simple, not very complete.

I (Chuck) added an appendix on LTI outcomes recommendations. There may be some discussion on this - conversation in the room on how to report outcomes (e.g., grades). Maybe some XML? (Sounds like Wilbert speaking.) We're not replicating CMI or SCORM tracking. If people want to get together and exchange data, fine. The work for outcomes in the full LTI is still under development.

The sample code is available at http://code.google.com/p/ims-dev/
org.imsglobal.basiclti.BasicLTIUtil.java
blti.util.php

It is wide open, and you can become a committer to this code; I will probably take the Apache committer document and adapt it for IMS.



Afternoon (Chuck Severance)

Working with the sample Java and PHP implementations.

http://code.google.com/p/ims-dev/

Use my OAuth code, not theirs - I have caught flaws in their code.

org.imsglobal.basiclti.BasicLTIUtil.java
blti.util.php

There's no LMS-specific code in there, and any LMS can use this code.

There are four basic methods in the code (a usage sketch follows the list):

- validate - take a look at a descriptor out of a cartridge and validate it

- parse - parsing the descriptor - looks at the XML and extracts the data

- sign - signing method - merge these properties with LMS data, secret, etc. Returns properties with oauth junk in it.

- post - pass the properties to (??)
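A guess at how these four calls might chain together on the consumer side - the method names follow the list above, but the signatures and helper variables here are assumptions, not copied from the real file:

<?php
// Hypothetical glue, not the actual sample code: wire the four
// methods together for a single launch from a cartridge descriptor.
$descriptor = file_get_contents('basiclti_descriptor.xml');        // from the cartridge
$lmsData    = ['user_id' => '292832126', 'roles' => 'Instructor']; // from the LMS session
$secret     = 'long-ugly-nasty-secret';                            // delivered out-of-band

if (validate($descriptor)) {                    // 1. sanity-check the descriptor XML
    $props  = parse($descriptor);               // 2. extract the launch data
    $signed = sign($props, $lmsData, $secret);  // 3. merge LMS data, add oauth fields
    post($signed, $props['launch_url']);        // 4. send the launch to the tool
}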

Some activities are using this code. A building block from Alan Berg, U. Amsterdam. Stephen Vickers, e.g. http://projects.oscelot.org/gf/project/ ...

Also - a JSR portlet for Sakai (maybe available in version 2.7) and planned for Moodle 2.0 (and a backport for 1.9 and 1.8) - developers are working on it. Or Python - www.tsugiproject.org - my take on a personal learning environment. It talks a number of different protocols - Simple LTI, Basic LTI, Facebook and Google.

(Various other exercises and tests - sorry this is pretty random at the end here, but we went through all the code - I'll analyze and have more on this in the future - I have working versions of all the code and have been through it, messing around with it.)

Wednesday, 15 July 2009

IMS Learning Tools Interoperability

Posted on 13:27 by Unknown
Another blog-summary from the IMS conference. This talk was pretty quick - I'll have a nice slow look tomorrow and may be able to post more.

Chuck Severance

We got a look at Blackboard 9 proxy tool patterns, and built upon that. It talks about how to put tools into specifications and to launch those tools. It defines how a tool is launched from the LMS, passing roster information.

Along with the specification design has been the development of code. The end point is to get it in the marketplace, rather than to have the standard - it is probably even better to finalize the standard after it has been used for a while.

The core thing is identity. We can use iframe, but we want to pass information along. Also, it enables the provision of learning from LMSs across multiple systems. It means you don't need to implement one of everything in an LMS. This allows us to make exciting things without sticking them into Blackboard, Moodle or Sakai.

When combined with Common Cartridge, this becomes very interesting. By having single sign-on to access services (and maybe make publishers some money) the cartridges can be very small and mobile. We don't need to send the giant Flash file all over, even if there's something in it we want to protect.

Eventually, LTI will allow even tools in one LMS to be used in a different type of LMS. Where we get our content, and even what language it's in, becomes less and less relevant. We don't have to bring everything into the course.

I created a spec, nothing official, called "Simple LTI" - I went out and posted it, no password necessary. So I've been writing code for a year and a half. But writing code helps me understand what the problems are.

Basic LTI is a profile of LTI 2.0 - we expect it out in July 2009. And LTI 2.0 by the end of the year.

- demo - the user experience - the user clicks on the tool, and poof the tools shows up

The launch form is obtained from the LMS and signed using OAuth; the producer receives the form, validates the signature, and provisions the user, course and profile as necessary; the tool output is then sent back and displayed.

We have some stuff in production - content integration: McGraw Hill sells directly to students via Common Cartridge LTI integration. Another: Moodle uses LTI to contact Google, which uses SAML to ask who is in the course, and then the lot is transferred into Google Docs. They use Google App Engine to accomplish that. Another example: k12.com.

Basic LTI - created 48 hours ago - and a version is in production now. Basic LTI for PowerLink is going to be open sourced tomorrow.

Also -- www.tsugiproject.org

Also - Pearson LTI 2.0 - pre-release. They have been working about 6 months on full automatic provisioning. You will go to some resource, click the button, "Add to my course" - just like the 'add to Digg' buttons.

Now, basic LTI is being added to Common Cartridge, so we can add these links to it. If you think of Common Cartridge 1.0, only a fraction of it can work in the LMS. Most of it is just some hacking back into the publisher servers. It's ugly, nasty and prone to breakage.

Common Cartridge 1.1 doesn't expand too much, but we standardize the links back to the publishers. No more hacking. So even though publishers will continue to do things the LMS can't do, we can meet in the middle using standards, and pass the data over.

There's kind of a hybrid model where we can imagine some sort of external player that can't be run in the LMS but can be stored in the LMS so the publisher doesn't have to run the player every time. (SD - an example of this would be a widget engine).

Eventually, you see the line move, so more and more of the stuff (learning design, for example) can be rendered by the LMS.

4-minute technical overview. LTI is a bunch of course data, roster and user information, etc. If you're passing data and you're not using OAuth, you're a fool. It's the practical security for REST. Google uses it. Twitter uses it. www.oauth.net

Three-legged OAuth is trust between two servers and a user. We do not do that - Google wants us to do that. Google's very pushy on this. So is Yahoo - to put course information into groups (don't tell anyone I said that).

demo - 8 line common cartridge with LTI

Basic LTI + CC - sample code - all available http://code.google.com/p/ims-dev

(awesome)

demo - a bunch of CC - LTI stuff from the tool.

Tomorrow - detailed walk-throughs of the spec. Heh heh heh.

An Overview of Common Cartridge

Posted on 12:38 by Unknown
Continued coverage of IMS. There's an IMS Common Cartridge testing tool, but they don't want to enable access without registration.


Kevin Riley

We got a bunch of content vendors and providers together to see what they supported, based on customer demand. We found quite a lot in common. The Common Cartridge format is based on fairly mature technologies that have been available for a few years.

The features in version one:
- we have cartridge metadata
- also, resource metadata - we wanted to be able to earmark resources, to state whether it's visible (or not) to an instructor or learner
- there is support for rich content - web content, media files, but also application files - e.g. MS Word
- integrated assessments - we used the question types that are actually deployed: multiple choice, true/false, essay, fill in the blank, pattern match
- discussion forum integration
- authorization for protected content - not DRM but something very light

Common Cartridge enhancements, version 1.1:
- extends role metadata to include a mentor
- great demand for multilingual support; we've adopted the alt.variant element recommended by the accessibility group
- for assessment, we included instructions for completing the assessment that could be viewed by the user - the QTI rubric was integrated for that purpose
- integration with LTI (Learning Tools Interoperability)
- association of learning outcomes with particular resources
- human-readable lesson plans, specific to the instructor

The building blocks of the specification are:
- the Dublin Core element set as the basis for the cartridge
- LOM as the information model for the cartridge metadata, based on the LOM schema loose binding
- packaging - a profile of Content Packaging version 1.2
- testing - IMS QTI 1.2.1 (this was based on what publishers are actually using)
- IMS authorization web service 1.0
- basic LTI, which is a profile of IMS LTI 2.0

What does the spec do: it defines a concrete, unambiguous set of schemas for building cartridges, and describes additional constraints to be applied to the schemas, leading to two main levels of compliance and a third for cartridges containing only QTI.

We don't define a run-time - that is entirely the province of the platform. What we define is the data expected by the platform; the platform has to deliver it.

- diagram of LMS run-time environment

Potential future:

- integration with web applications
- also with e-books
- also with services in their own library

Common Cartridge metadata resides in the manifest. It is not order-sensitive. Any content included in the cartridge requiring a special player must be described in the metadata, so users are forewarned.

Resource metadata optionally restricts who a resource is visible to. Curriculum metadata identifies learning requirements. There has been much discussion of this. Instead of tagging the cartridge directly with tokens of values of a curriculum model, we refer to the originating domain, identify the coverage or region, and then for this specific resource we list one or more URLs of standards relevant to that resource.

There are conceptually four different types of content that can be in a cartridge:
- learner experience data - the learning material, what we would normally think of
- supplemental resources, that can be pulled in by the instructor
- operational data - eg., authorization
- descriptive metadata

There is a content hierarchy:
lowest: HTML, application files (Word, PDF), static content
next: web links - content accessed from the web at runtime, which can be updated after the cartridge has been distributed
next: XML - e.g., QTI, etc., which can be managed by the LMS
highest: LTI - other technologies - ebooks, virtual labs, multi-user simulations, etc.

The content we don't use - after a vigorous application of Ockham's razor:
- multiple organizations
- CP version-specific objects
- sub-manifests
- inter-package linking

Things that were added:
- a group-level folder that shared content had to be placed into, to avoid ad hoc links
- in a Common Cartridge there are no learning objects - there are learning application objects

- diagram - schematic of common cartridge, taken from model of content packaging

- diagram - cartridge web content links - illustrating how cartridge can refer to shared resources, but not each other

Only one question bank is permitted in a cartridge. There's no order in a question bank. It only supports the limited question types mentioned above. The question bank cannot be referenced by any of the resources in the cartridge.

LTI is a means to launch or reference an external tool, but also to gather outcomes. Basic LTI v1.0 is hot off the press.

Also, eBook integration. We have publishers selling books to learners, and also making cartridges available to instructors, to help them use the book. The professor installs the cartridge in the LMS, and then via LTI the LMS can access the book. So now we can (eg) refer the learner to the particular section of the eBook relevant to their place in the course.

The way we do this is via the hosted service for eBooks (because you can't guarantee the student is at the right machine to access the downloaded eBook).
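
Mechanically, a Basic LTI launch is an OAuth-signed form post from the LMS to the tool. A rough sketch, assuming the third-party oauthlib package; the tool URL and key are hypothetical, and the parameters are the usual Basic LTI launch fields:

from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "chapter-3-reading",  # hypothetical placement id
    "user_id": "anon-42",                     # opaque user identifier
    "roles": "Learner",
}

# Sign the form body so the tool (eg. the eBook host) can verify the LMS.
client = Client("bookstore-key", client_secret="bookstore-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    "https://ebooks.example.com/launch",      # hypothetical tool endpoint
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)  # the form body now carries oauth_* parameters and the signature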

Finally, there are two forms of authorization: authorization on import - is this a valid site, do they have a valid license? And then, authorization on use.

How do we achieve adoption? We have the spec, we have a mechanism for regional and national profiles, and we have a profile registry and a compliance program. We also have a mechanism for support and collaboration, for example, the CC Alliance.

On the profiling side: education is never going to be uniform around the world. So our standards have to support local practice. So we have to allow customization, even if it's only around language and vocabularies. In some cases this may require functional change, but we don't encourage that.

The compliance program - members of the alliance who pass the test can use 'compliance' marks. It is a self-test. We have an automated test tool (downloadable free of charge). Anybody can test a cartridge - there's nowhere to hide. On the platform side we have run-time tests that are designed to exercise every known functionality. Also, we have some known-error cartridges.

The cartridge test tool runs on the desktop, in Java, and is available on the home page of the IMS website. You can batch-load cartridges for testing. It generates a test report for each cartridge.

The Future IMS Learning Design

Posted on 11:43 by Unknown
More blog coverage from IMS in Montreal.

The Future IMS Learning Design

What is the future of learning design? It has been around for seven years, but it still disseminates very slowly. Is it time to revise or reinvigorate the specification? Should it be combined with packaging, such as Common Cartridge? What kind of support is needed for its dissemination?

Joel Greenberg

My own personal view: maybe the world is moving on, and maybe the idea of 'design' is a bit old fashioned. I have been looking at the world of social networking, and my mind is moving away from the model where we are the experts and package the knowledge and sequence it.

Academics love books, and they're very into narrative, heavy narrative; even light-weight narrative is a challenge to them. It's interesting, because they all use technology, but there has been a lack of interest in using technology in their teaching. A lot of them ask, what's in it for me? They get more credit for publishing a book; a LAMS sequence wouldn't even be on their c.v.

The overall project: a generic description of services, a methodology for adding those services, and software that supports this approach. Which led to the SLeD prototypes, etc. You can Google it.

Conclusions: issues about efficiency, limitations of the generic approach (it gets very complex), complexity of generic service descriptions (Symbian asked, "What would make it work?" I said, if it's no more difficult to use than PowerPoint. Which it isn't.) Does it scale? Also (Martin Weller) the content gets bound up with its description. And finally, the difficulty of integrating tools.

Instead of trying to systematize, to sequence, a more appropriate approach is based around patterns and connections. I would question the LAMS / Moodle thing; it looks like overkill. It looks like a huge overhead just to sequence the use of tools within it. It just packs everything up and tells people when to use it.

We had a problem. We couldn't get academics to use any of this stuff. But they all have Word, so we looked at that. They were working to schemas. We found that over 80 percent of OU material could be done with one schema; they were changing it (the schema) for the sake of changing it, not adding value. So now we have this whole system based around Word - a structured interface, graphics from a repository. The idea is to lead academics to a lightweight narrative, and have them design around it. Then (what they liked) they can render it in PDF, Moodle, etc.

It took five years to get academics to use this, and it took an edict, "you will use this." The same academics that use templates produced by publishers.

This approach has been pretty effective. Something like 6000 hours of work produced using this stuff. So I'm less interested in the pedagogy, I'm more interested in the process. My experience is that your standard academic simply won't touch this stuff (learning design).



Mike Halm

I can tell you that my feeling is very similar. I recall being in the room when we decided we would begin working on a learning design spec. It was started in 2001, and the main spec came out in 2003. It has produced some research projects, but not mainstream implementations. It's very powerful, but also very complex. We won't find any faculty willing to adopt these tools.

This idea of a community: it needs a community to be successful. A more open, collaborative support environment has the potential to lead to wider adoption. But we (IMS) need to be more open. We write these specs, but there's no adoption.

So, what is the future of learning design? Simpler and lighter, things that kids can do, like YouTube. Interoperable - the widgets are a good example. It needs to be more mobile, interactive (service-orientation), flexible and extensible.

Wilbert demonstrated ReCourse. Very visual, contains a lot of information, hides a lot of complexity behind it, so the user doesn't have to worry about it. The LAMS tool is very similar. But again, I don't know whether faculty will use this at the higher ed stage; maybe the K-12 stage, where there's a much greater emphasis on outcomes.

Prolix: from .LRN - this project again focuses on simplicity. That is again what we must concentrate on if we want people to use these tools. Anyone should be able to create these things. That's what we see in web 2.0 - everyone is a creator. And we need to design learning content in order to make that happen.

Support needs: this is our second project. The first was open source in name, but not in how it operated. We didn't have contributions; we had a grant and we went out and did all the work. This time, we looked at the process, how to engage the community, and we're getting a lot more activity out of the partners. This idea of creating learning design communities, interacting in real-time -- getting an email here, after a month answering a question: that's not what I'm talking about. I mean IRC: someone has a question, someone from the community answers (rather than us providing the answer for every single person).

One of the things missing from a lot of open source projects is documentation. If we have to do something in open source, we'll write it down. This has created a lot of traffic on our site. Documentation has a lot to do with creating a useful tool and supporting implementation. Pretty soon you get a rich resource for the entire community. Not just technical: eg. how do we get reuse out of the design itself. Also: how do we create plugins and extensions?

Take the learning design spec: how can we help people create a very quick implementation that other people can use?

- demo - Weblion Wiki - user created content and support

Something like this for learning design would be really useful, and if we made it open, then anyone could contribute content. A user community would be a tremendous way to think about how to support a learning design implementation. As IMS we need to think about this - not just for learning design but other stuff too - so you don't have to come to a conference to find out that there's stuff out there.




Gilbert Paquette


My view is a bit more optimistic, I don't know why, because we've been working on this for fifteen years, but we're working on hard problems. Learning design has had more of an impact on workplace learning. In corporations, you have to distribute lots of information if you want to keep your people, and so you have to prepare things in packages, so we're inclined to try to prepare things.

The web is our learning platform; we have to remember that. It's not to prevent the social web activities. But the instructional design is the most important part of learning. The instructional designer sees many paths that students could use. It's more than a question of interoperability; it's a question of activities, and interaction between actors.

We all agree, I think, it's very slow adoption. The tools were form-based, not very user-friendly, with few methodological aids, and there is still no LD repository. And since the specification we've had web 2.0 and web 3.0 - my group has written to IMS about the weakness of collaborative activities. And we're seeing competencies, and the specification is weak on that.

Four lines to extend LD access:

1. Simplify the authoring process: simpler visual modeling, visual pattern repositories, design scenarios and aids. One question: should the teachers be their own designers? Should the learners be their own designers? We can see, in some scenarios at least, designers using the tools to create the designs.

2. Provide a learning design run-time engine for interoperability.

3. Profile or simplify the IMS-LD spec

4. Extend the web to social network and web 2.0 applications and contexts.

Simpler Visual Authoring - We saw examples of that today, eg. editing simplification in TELOS.

LD Executable pattern repository - if we had a good repository, with the best instructional design minds around the world providing things, then we wouldn't have difficulty convincing the teachers to use them.

Run-Time engine: the idea was to delegate CC sequencing to an external tool that plays an IMS-LD file if it's present in the cartridge.

Simplifications: we need new levels in the specification - a new Level A could integrate the most useful Simple Sequencing components. Also we need a way to bring in services, and collaborative design.

Conclusion: it has been six years now that we have been working on the LD specification. We need a new group, not only to simplify the specification, but also to integrate the specification into actual practice. If you are interested, send me your email.

Discussion

Wilbert: (very soft, something about authoring a spec and then just running it) The question is whether we can ever build tools good enough to run them.

Phil: all the people in this field are not in main-line course production.

Gilbert: still, you have some LO repositories; if you have incentive, if people contribute, it is recognized. The same could be true for learning design. But I agree, it's more the instructional designers who are inclined to do this. That's why OUNL, or us, are more involved in this.

Mike: what we've done at Penn State, we have developers at every college now, because if we put our courses online, we get back some of the tuition. But the stuff they're developing is still old school, essentially a page-turner.

Motts (?): I remember 2003 or 04 sitting in a room with everyone in LD at the time, coming up with the view that the spec had split personalities, with no agreement on what the spec is for. I'm seeing the same split today. Is it a way to create courses? Is it a means of exchanging complete courses, complete with their pedagogy? Is it a way to exchange pedagogical practices? Probably not - we know how hard it is to even get colleagues to reuse their own work. We need to home in on what is really important about it. Because otherwise I am inclined to agree with Joel, that Google Wave will do away with it.

(Comment): what keeps crossing my mind is, what is the mechanism to test the effectiveness of this pedagogy after you've pushed the learning through? How do you know your learning design is not flawed? Does it integrate with something else?

Phil: in the world of social networking, students would rate it badly and it would disappear without a trace.

Guillaume: I am not sure that we have drawn all the needed conclusions from IMS-LD. I'm not sure we are clear what it was - a modeling language, an interoperability spec? It is probably time to steer the committee. We need to have all the major actors talking about how to implement the lowest-cost learning design. But I think we also need a pedagogical language to communicate between teachers - but maybe that's another part.

Gilbert: it's a question of the egg and the hen. If you don't have a platform, people won't start doing it. And if people don't start doing it, then commercial vendors don't extend their platforms. I agree about setting the goals - maybe the spec does too much.

Guillaume: maybe we should abandon the goal of completeness in IMS-LD, to model all pedagogies.

Chuck: it seems a conflict between instructional designers and teachers. The spec is designed for instructional designers. But as a faculty member I will never use LD. I watch. As soon as you say "and then you draw a line" I know I will not use a specification. Because then you're not programming any more. I would fiddle with things. I want to be able to do my course without drawing lines - I will not draw a line, because I'm not a professional learning designer, and that's not how I think. It needs to be designed more along the lines of the way we work - we create content and then add commentary. We may be elegant at the end, but we're not elegant at the beginning.

Mike: the faculty create online lectures, not courses.

Chuck: my view is, I own my course, I want it to be as messy as possible.

Gilbert: perhaps the spec was too revolutionary. Your approach is very conservative. Perhaps they should learn some pedagogy. They teach in classes, so they think they know pedagogy. But all they do is give information, telling, telling, telling. This is zero for learning.

Chuck: this is a language problem.

(Comment): It's a language problem, and it's a results problem.

Mike: in K-12 they're driven by outcomes - proof of learning - perhaps if that were more the case in higher ed we would see more of the same.

(Comment): learning design can be interpreted in so many ways, and that's what makes it difficult. One view - LD is simply to describe pedagogy. But it was viewed as doing too much. And as technology, it's crap. It doesn't do what it was designed to do.

(Comment): I want to respond to the idea that learning outcomes are not important in higher ed. In the U.S., that is their mantra these days. I think the time is ripe to focus on outcomes.

Learning Design Tools (Demonstrations)

Posted on 09:40 by Unknown
Continuation of IMS 2009 in Montreal blog coverage.

TELOS - Gilbert Paquette

IMS Learning Design (IMS-LD) is a bridge between design and delivery. The goal was to provide a containing framework of elements that can describe any teaching-learning process in a formal way. It is also an "integrative layer" for other specifications, such as LOM.

The main improvement compared to SCORM is that you have multiple learners and multiple roles, so you can have collaborative learning scenarios. There are also more advanced personalization possibilities. But at the same time it is more demanding - it requires support for educational modeling; you don't just have tree-like structures. You need simpler tools and methods. And you need repositories of LD patterns.

The IMS-LD model defines an activity structure in a 'method', along with persons that perform roles. The activities are performed in an environment composed of learning objects and services.

It is designed to address various needs, for example, K-12 lesson plans, higher education learning patterns, or workplace training.
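
A minimal sketch of that model rendered as nested data rather than the real XML binding - the names are invented for illustration, but the nesting (a method runs plays, plays run acts, acts pair roles with activities in an environment) follows the description above:

unit_of_learning = {
    "roles": ["learner", "tutor"],
    "environments": {"env1": ["forum", "reading-list"]},  # objects and services
    "method": {
        "play": [{
            "act": [{
                "role-parts": [
                    {"role": "learner", "activity": "discuss-topic", "environment": "env1"},
                    {"role": "tutor", "activity": "moderate", "environment": "env1"},
                ],
            }],
        }],
    },
}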

TELOS isn't exactly an LD designer and player, it's an educational modeling designer and player that exports to IMS-LD.

- demo on screen - TELOS desktop -

TELOS is an e-learning operating system, which is service-oriented and ontology-driven. One of its main features is a visual scenario editor and a multi-actor scenario player. It has a resource manager with resources classed by a technical ontology. These resources are fed into a graph, where they are assigned certain semantic properties. These are then fed into a task manager, where the user interacts.

The TELOS visual language emerged from MOT+LD, IMS-LD and the Business Process Modeling Notation. As noted, it exports to IMS-LD. Users design a learning model through an executable graph - that is, they edit the learning scenario visually.

Demo plan:
- resource manager and repository
- scenario editor
- task manager

- demo on screen -



Wilbert Kraan - Widgets, Wookie Server and Recourse

I work for the University of Bolton; I'm here with two hats on. One is the service that we provide for the JISC - CETIS. We were actively involved in the specification of learning design. And the other is participation in the TenCompetence project, which I'll talk about today.

Context: TenCompetence is a big European project coordinated by Rob Koper of OUNL. It runs for four years, ending November 2009. The idea is to create an infrastructure for lifelong competence development (so 'runnable' is important). The infrastructure is open source and (as much as possible) standards-compliant. IMS-LD was a key enabling technology. So we got new tools for running learning designs.

IMS-LD has extremely generic services. You can connect with discussions, votes, etc. The trouble is that LD simply enumerates them - it says you can have a chat, but not how or what or why. So if you exchange an LD from one institution to another, the service that it was designed for might not be available at the point of reuse. So we needed a new approach - we extended IMS-LD with parameters to call widget-based services. So once you integrate IMS-LD with the widget serving platform - Wookie - the same services are available everywhere.

This was pretty challenging, and had pretty interesting spin-offs. For example, Wookie is a stand-alone thing.

Now there are lots of widget engines out there, but what they don't do is share state, so they're not very good for collaborative activities. For learning, you want to share the state of a particular widget with a group of people. So, eg., your chat post is propagated. Or your vote is sent to a group of people.
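
That sharing is the whole trick, and it reduces to a small sketch (mine, not Wookie's code): one state object per widget-and-context, with every participant's update propagated to the rest:

class SharedWidgetState:
    def __init__(self):
        self.state = {}      # (widget, context) -> list of events so far
        self.listeners = {}  # (widget, context) -> participant callbacks

    def join(self, widget, context, callback):
        self.listeners.setdefault((widget, context), []).append(callback)

    def post(self, widget, context, event):
        key = (widget, context)
        self.state.setdefault(key, []).append(event)
        for notify in self.listeners.get(key, []):
            notify(event)  # eg. a chat post reaching every group member

server = SharedWidgetState()
server.join("chat", "course-101", lambda e: print("got:", e))
server.post("chat", "course-101", "hello class")  # -> got: hello class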

We took as a starting point the W3C widget specification, which is now being finalized. Our contribution to the spec is the cooperative extension. Then Google came into the picture, and Google Wave was almost the same thing. So we adapted Wookie so it now handles Wave gadgets.

So basically, you have a Wookie server, and it takes care of a bunch of things - a chat forum, weather, etc. And it takes care of the state of each of these widgets. You can look at the widget instances from the server, choose the one you want, and instantiate it in your context. So you can have stuff like some of the things from Apple Dashboard, etc.

- demo of some widgets - esp. chat forum and vote running in SLeD Learning Design Player -

There's a plug-in for each containing platform, eg., there's a plug-in that allows Moodle to talk to the widget server. So at run time it asks for information that identifies participants (anonymized) - the individual user can be associated from Wookie back to Moodle.
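
The talk doesn't say how the plug-in anonymizes, but one plausible scheme is keyed hashing, so only the LMS (which holds the secret) can associate the token back to the user:

import hashlib
import hmac

SECRET = b"lms-only-secret"  # hypothetical; known to the LMS alone

def anonymize(user_id: str) -> str:
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()[:16]

lookup = {anonymize(u): u for u in ["alice", "bob"]}  # LMS-side table
token = anonymize("alice")                            # what Wookie sees
print(token, "->", lookup[token])                     # the LMS maps it back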

Within the application, you query a Wookie server, which displays a list of the widgets available. So in LD, you define an activity, and then associate it with a particular widget.

Version 2.0 of the ReCourse (LD) editor has been released.

Work in progress includes Astro, which is a new version of the LD player. It uses the CopperCore player. Each activity is instantiated separately. Then you can jump in wherever the LD allows you to. And then you access the particular service, which is to say, the widget.

That was the problem: we needed services in LD, so let's use widgets. So then we said, we have widgets, let's just use that. But then we came around full circle and said, if we could script those widgets, we would have a learning activity. And that took us back to Learning Design.

- demo - ReCourse Editor built on Eclipse -



Ernie Ghlgioame (?) too tiny to read - LAMS

At its core, LAMS orchestrates learning activities by allowing teachers to create, run and monitor sequences of activities.

- demo - LAMS dashboard, version 2.3

On the left hand side I have all the different activities, and on the right a blank canvas. I drag and drop into the canvas, then create sequences by drawing arrows. I configure the activities by double-clicking on them; this opens a form that allows me to create the content and to define behaviours. Designs can be saved, and I can preview them from the same view students would see.

- demo - examples of sequences

Then I can actually share my designs with other teachers. So, eg. we can create a copy of the activity we have just designed - a 'run'. The students get actual links to the design.

- demo - run

I can look at the design, and actually click on an activity and see what a particular user has done in that activity, what he has contributed. Also, we have been looking at the concept of time in activities. Time, compared to output, won't tell you much. But we're trying to explore it.

There is also the 'stuff happens' button. We may have made assumptions that were wrong. So you can change not only the content but also the sequence. But changes can apply only after the furthest point any student has reached, so as not to confuse the students.

We have several integrations of LAMS - Moodle, Sakai, several other LMSs. And we are trying to incorporate other activities that are not necessarily LAMS activities within LAMS. Eg., you want to be able to offer Moodle activities when in a Moodle environment. And we can export the sequences so someone else using LAMS and Moodle can run it.

- demo - setting up Moodle Forum in LAMS

And then I can get the output of a Moodle forum to do branching as well.

Question: in Moodle, how do you control the flow within the activity?

The tool brought into LAMS is basically the tool that does the workflow control. LAMS is a wrapper around the tool, and uses certain hooks. So, eg., you can have Moodle widgets running within LAMS activities.

- demo - Waveword Widget in LAMS

And that's it!

Standards for Pedagogically Relevant Learning Environments - Where Are We?

Posted on 07:52 by Unknown
From the IMS Global Meeting in Montreal, summit on Standards for Pedagogically Relevant Learning Environments.

Where Are We?

Gilbert Paquette

The main reason we do repositories is that we think we can achieve better quality in learning resources.

The key connecting standards we are working on include Common Cartridge, Learning for All, Learning Design, etc. Common Cartridge aims to encompass SCORM. There is a common content package, a metadata standard - we will talk about Dublin Core, LOM, and ISO-MLR (a new standard).

For Common Cartridge sequencing IMS is looking at Simple Sequencing, but also at Learning Design, and possibly how to join the two. In the future, IMS wants to implement progressively in Common Cartridge other standards for different goals, eg. a Content Authorization Standard.

When we look at all of these standards, lots of them involve interoperability, esp. on different platforms. These do not dramatically change the quality of learning. Standards we will talk about later - Digital Repositories, Learning Design - will have more impact.

The first subject, then, is GLOBE - a network of learning repositories.

Why are we interested in learning object repositories? First, because they support resources maintained by institutions, and professors maintain their quality through peer review and actual use. Additionally, the metadata provides valuable information about authors, subjects, etc. This enables focused queries. And the vast majority of material will be reused.

The first panel:
- Gilles Gauthier - convener at ISO SC36/WG4
- David Massart - LODE co-leader and ASPECT project manager
- Gilbert Paquette - chair (Tele-university)

Questions: are the days really over for 'one metadata for all'? What about the multiplication of profiles? Should we promote a new interchange method? Is there a need for a new standard encompassing other standards? What about the new standards, such as ISO-MLR? New projects, such as competencies?


Gilles Gauthier - Metadata for Learning Resources

The MLR is the standard development that is going on at ISO and is almost ready. It's not a big standard. Many of the people in this room work on ISO and are editors of this standard.

Gilbert asked questions. First, are the days of 'one standard' really over? My answer is 'no'. One standard is nice, if we take out the RDF triples. Is the multiplication of profiles a problem? No. Is there a need for a new standard encompassing all approaches? Yes.

ISO/IEC JTC 1/SC 36/WG 4

ISO - the International Organization for Standardization - is composed of technical committees; there are hundreds of them. The JTC is a joint technical committee with the IEC - the International Electrotechnical Commission. The JTC addresses standardization in the field of information technology. There are different types of members - participating, observing and liaison.

Participating members are representatives of countries, with only one vote per country. Subgroups, called Working Groups, can also include experts nominated by participating members. There is a very clearly defined process - proposal, preparatory, committee (where the voting starts), approval, and publication. The idea is to achieve consensus on a technical document.

SC36 is the committee on learning technology. SC37 is biometrics. SC2 is the committee that produced US-ASCII and Unicode, etc. Under SC36 there are WGs (Working Groups) covering different learning technology standards. The members of SC36 include national federations. Liaisons include organizations like AICC, ADL, etc.

Working Groups have projects; WG4 has 9 projects, 6 related to MLR and 3 related to IMS content packaging. SCORM 2004 3rd edition became a technical report from this group.

Projects:

MLR 1 - framework. This was the hardest, defining the process for the rest.

MLR 2 - core elements. Essentially, this is Dublin Core.

MLR 3 - core application profile. This is an application profile for the core elements.

MLR 4 - technical elements

MLR 5 - educational elements

MLR 6 - availability, distribution and intellectual property elements

CP 1 - information model

CP 2 - XML binding

CP 3 - best practice and implementation guide

Various resolutions enabled this work. One major resolution was to maintain compatibility with IEEE-LOM.

The scope is restricted to specifying, in a rule-based manner, metadata elements and their attributes for the description of learning resources. This is a multipart standard, with different components. Multiple languages and multicultural requirements are supported - eg., you only use neutral, non-linguistic identifiers. All languages are equal. Also, it is intended to support multiple levels of granularity, and to support user extensions.

From a centralized resource to a distributed resource. Centralized, only one body can provide metadata. But metadata on the web can be provided by anybody. You search for the metadata, collect it, filter it, transform it, store it, explore it. So you can get MLR records from those RDF graphs. The MLR view is the centralized point of view.

The idea is to be able to specify data elements - to define what kind of data you want to have. The central element is the resource. In the MLR all data is simple - all triples - if you want to have something complex (like a vCard) it means you want to have a resource.
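
To make the 'all triples' point concrete, a toy sketch: metadata as bare subject-predicate-object tuples, gathered from anywhere and folded into a record for one resource. The URIs are invented placeholders:

triples = [
    ("urn:res:42", "http://example.org/mlr/title", "Intro to Fractions"),
    ("urn:res:42", "http://example.org/mlr/language", "eng"),
    ("urn:res:99", "http://example.org/mlr/title", "Algebra"),
]

def record_for(resource, triples):
    # Collect every statement about one resource into a flat record.
    return {predicate: value
            for subject, predicate, value in triples
            if subject == resource}

print(record_for("urn:res:42", triples))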

Data element specifications - each has a unique identifier, each specifies exactly the possible values, each has human language equivalencies.

Data elements spec ID:
Subject: resource being described - resource ID
Content value: Fodors
Language value: eng

Resource identifier: ID
name: name
Definition: the definition
SubclassOf: other resources
Note: note

If you have a set of MLR elements, you can create an RDF binding. And if you use an XML binding, you get back something that will look exactly like a LOM record. The MLR is quite simple: it tells you how to define data elements, and how to define application profiles.



David Massart - Learning Object Exchange

I work for European Schoolnet, a network of 31 ministries of education in Europe. It is dedicated to improving learning through technologies. One of its projects is a learning resource exchange - we want all resources in Europe to be accessible.

The LRE is a service for ministries of education, driven by them. With 31 countries, many languages, etc., we want 'content that travels well'. Think of music - whatever the language, it travels well. Compare with a lesson plan in English, for example.

For the purpose of this discussion, a learning object is any resource that can be used in learning. For any resource,
- you need metadata to describe it, and also to assess it
- you need the information to determine whether you want to get access to it or not
- then, once you decide you want access, you need metadata to help you use the resource

The LRE gathers metadata from ministries of education, publishers, museums, teachers, etc. and makes the aggregated metadata available to LMSs, portals, etc. Various standards and protocols are involved: SQI, SPI, OAI-PMH, etc. We try to get metadata by all means, and to provide it by all means.
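
On the OAI-PMH route, harvesting really is just HTTP with a few query parameters. A minimal sketch against a hypothetical endpoint, with error handling and resumption tokens omitted:

from urllib.parse import urlencode
from urllib.request import urlopen

endpoint = "http://repository.example.org/oai"  # hypothetical repository
query = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})

# One page of Dublin Core records, as XML, ready to be parsed and filtered.
with urlopen(endpoint + "?" + query) as response:
    xml = response.read()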

When you try to do that, you have to really take care of your metadata. There is a lot of negotiation; we try to agree with a provider as to what the metadata must be. We control for quality and sometimes reject it. We need to correct and compile metadata, to complete the metadata records. We use an internal format:
- identifiers
- language blocks, eg. title or description in English
- indexes

We have the problem of identity. When we get a record, we need to determine whether this is a new resource, or one we already have, in which case we update the record. Identification of resources is key. We need to be able to uniquely identify resources - difficult because of political aspects. If you control the identity of the resource, you control the access to the resource.
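
The new-or-update decision, as a sketch keyed on provider-supplied identifiers (real matching is much harder, for exactly the political reasons just described - the same resource can arrive under different identifiers):

store = {}  # identifier -> metadata record

def ingest(record):
    # If any identifier is already known, treat this as an update.
    for ident in record["identifiers"]:
        if ident in store:
            store[ident].update(record)
            return "updated"
    # Otherwise register the record under all its identifiers.
    for ident in record["identifiers"]:
        store[ident] = dict(record)
    return "inserted"

print(ingest({"identifiers": ["urn:lre:1"], "title": "Cell Biology"}))     # inserted
print(ingest({"identifiers": ["urn:lre:1"], "title": "Cell Biology v2"}))  # updated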

IMS Learning Object Discovery and Exchange - deals with how to find repositories. There is a registry of repositories and protocols. Learning objects can have different versions (English, French), different formats (SCORM, CC), copies, etc. This is important: if you need to use an object, you need it in a format you support, in a language you read. All of these are contained in Information for Learning Object Exchange (ILOX).
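
As I understand it, ILOX borrows a FRBR-style nesting (work / expression / manifestation) to group those variants; the structure below is my own illustrative rendering, not the real schema:

learning_object = {
    "work": "urn:lo:photosynthesis",
    "expressions": [  # versions, eg. by language
        {"language": "eng",
         "manifestations": [  # formats of that version
             {"format": "SCORM", "location": "http://example.org/lo/en-scorm.zip"},
             {"format": "IMS CC", "location": "http://example.org/lo/en-cc.zip"},
         ]},
        {"language": "fre",
         "manifestations": [
             {"format": "IMS CC", "location": "http://example.org/lo/fr-cc.zip"},
         ]},
    ],
}

# A consumer picks the copy in a language it reads and a format it supports.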


Discussion

Gilbert: We have a diversity problem - it's difficult to federate, harvest, etc. And it contradicts the idea of freedom (to have one standard), yet we want to interoperate with others. Let's suppose people are using MLR, or ILOX - how will that solve this problem?

David: I don't see diversity as a problem. We have different communities with different needs. And we are simply trying to meet their needs. You define something that is historic in your community, and you start to use it. Then you interact with other communities - you negotiate. Anything that helps express the semantic differences and express these mappings is good.

Gilles: the difference is, you have a learning resource; it has different languages it could be in, different formats it could be in - all of those are different resources, with relations between them. Variants. Suppose you have a community that is blind. If you try to centralize them, you have more and more and more. What is a contributor: it could be a person, an organization, or a service. More and more. Suppose you have a person. You have first name and last name. But that doesn't make sense in some places. Let the community define them.

David: what you have then is a framework for how to create elements and how to name them. So you have a nice framework for identifying things. But at the end, whatever profile you use, you will start to exchange these descriptions.

Gilles: yes and no. The ISO-LOM survey from 2004 - most of the LOM elements people do not provide information for. If you go to MERLOT and look at the LOM records, they're mostly empty. What MLR will say is: a property must have a domain, and must have a range.

Mike: I've been in meetings like this for 15 years. The purpose of metadata is discovery, wouldn't you agree? Yet I can find more information on Google, which uses none of that.

Gilles: As a human being.

Mike: as a human being. Behind Google is an incredible amount of machine learning. Metadata has this problem. We have all those structures, but we can't match them with the need. I still can't find what I need! How do we do it more like the way Google does, full-text indexing, etc.? I understand your comments about the importance of standards, but I think we're a bit myopic.

David: it's not that you use one metadata standard rather than another. Google doesn't do such a great job when it comes to finding resources. If I want to do a specific query, I'm not sure how to express this query in Google, and I'm not sure how I will find the resource. At the moment Google doesn't do a very good job, and we will find a better resource.

When we look at what people are doing, what we are looking at is people who are trying to derive metadata in an automatic way. We see a text in English. We track usage, and we see some resources travel well, and others don't. We look at the resources that do travel well, and we try to understand why. We say to providers, we are interested in these resources. It's a web 2.0 approach - we look at tags, bookmarks, keywords, descriptors, etc. We see uses that we didn't foresee.

The key to the ILOX approach is that it's a container. We try to encapsulate metadata created by providers, but also taxonomies, ratings, etc., generated by users. We don't make any assumptions about what is useful metadata and what is not.

Gilles: I also agree: Google, for human beings - you will find something, but it's not much use for machines. I would like to see just another way. You have a resource, and you have an identifier. You can have as many identifiers as you want; you can go to the international identifiers union and get a unique ID for free. I would like to ask Google for this record I want to have, and then there is a graph of related resources - this resource is for the blind, etc. Google could do a better job presenting links.

Comment: Good - MLR aligns with the semantic web, which is where it has to go. Now, as an abstract model, it sounds a lot like the Dublin Core abstract model, with the semantic web. All of those seem to align one-for-one with those movements. How does MLR differ from them?

Gilles: The global ID is the same. What we add is a way to specify an application profile to get LOM-like records. Also, the way this is multilingual, using a neutral identifier. All those data elements will be in an international standard.

Comment: when you talk about the URIs, are you just saying they are opaque, that they don't carry semantic data?

Gilles: they could. It's just the way you create your URIs. They have structure. You have rule sets. This is very precise, to specify the domain, the range. This is a namespace for ISO. ISO has a whole structure. We will use that.

David: The problem being if people assign different unique identifiers to the same resource.

Gilles: You won't have that. You will have one.

Gilbert: you've seen two major initiatives that try to reconcile free and centralized descriptions of resources. These two initiatives can probably link together. ILOX can use MLR. And what W3C is putting out.

Diny: we've gone through the same process with GEM, with 15 million resources with metadata that needs to be massaged. We have transitioned that store to an RDF store. So we live in both worlds now - the XML and the RDF world. The blending is not only attainable, but we're doing it.