Archive:Eduzendium: Difference between revisions

Revision by Daniel Mietchen (preparing further documentation)
Revision by Joe Quick (examples should follow the explanation, not precede it)

Revision as of 09:34, 7 August 2009


How to join

For more specific details about recruitment and the mechanisms and utilities for collaboration, please see the dedicated Eduzendium Recruitment Page. That page includes the list of classes associated with us and instructions for signing up.

Operational details

How to categorize your pages, how to add templates to the page, how to register and retrieve passwords, etc.

See also

  • A list of courses already integrated in Citizendium
  • Eduzendium instructors discuss their experiences here.

Eduzendium[1] is a program in which the Citizendium partners with university programs throughout the world to create high-quality, English-language entries for the Citizendium.


What does Eduzendium do?

The Citizendium invites university instructors to include the crafting of a Citizendium article as an assignment.

Our project is open for collaborative educational and knowledge generation initiatives with higher education institutions. We strongly believe in the necessity of inviting experts of all kinds to help us build our repository of knowledge.

A distinct approach in this context is our policy of inviting the professors who teach, and the students enrolled in, advanced courses of the foundational/"fundamentals of" sort to help us seed or build up our entries with high-quality, clearly argued and clearly written content. A pilot program involved major universities in the United States and abroad in late 2007, with good success. We hope the program will extend throughout universities in the English-speaking world.

Philosophically, we believe that the individuals who struggle with the meaning of fundamental concepts on a daily basis make excellent authors and editors for entries on those concepts. Advanced foundational courses are an ideal site for recruiting such authors and editors because their primary goal is to redefine and communicate for each generation the meaning of the basic and essential issues of our knowledge world. Furthermore, the activity of these seminars is often directed at producing short and insightful papers about basic concepts, which may or may not later be transformed into more "formal" publications. We believe that opening up the Citizendium to collaborative work on specific topics by students and their professors offers them the opportunity to take their work to another, more socially consequential level. This enhances the educational process on the one hand while, on the other, helping the Citizendium build its socially involved and expert-friendly knowledge environment.

In brief, we encourage faculty to use the Citizendium as a platform for their students to write public entries about key terms pertaining to a number of disciplines.

The collaborative process

In inviting the academic community to join us, we are aware that we will be successful only to the degree that we offer educators and students the opportunity to do what they ought to be doing: teach or learn in an efficient way, with the added excitement, feedback, and real-life rewards of being part of the Citizendium. We are aware that the primary goal of the education process in academia is to transmit useful knowledge and to train students for success. The Eduzendium program is designed to be extremely flexible and adaptable to the needs of each professor and seminar member. It includes an array of possible collaborative arrangements, and the actual editorial process will be shaped according to each seminar's policies.

The simplest and most direct collaboration is for the professor to have the students sign up on the Citizendium and perform a certain amount of work, or initiate and actively collaborate on a specific entry. In other situations, the professor can charge specific students with writing specific entries, which can be evaluated and edited for content and style individually. Editorial changes can be made by the professor, by a team designated by the professor, or by his or her entire class. This can be done using our wiki platform, in which case the topic can be reserved and closed to public access for a limited period of time. (You must ask, however, and make your intentions very clear.) Professors and their students can obtain access to a specific namespace or wiki page, which will be editable, and even readable, only by them for a period of time (typically, until the assignments are finished). Conceivably, some seminar might decide to work on their topics completely outside the Citizendium technological flow and only provide the Citizendium with the best of their finished products; that would be fine as well.

In a different scenario, the professor can assign the topics to the entire class, asking the members to work on them simultaneously and edit them during a period of time. He or she can intervene in the editorial process when and if needed. This, again, can be done inside or outside of the Citizendium process.

Finally, instructors can decide to work collaboratively on an existing topic in the public view and to assess the fruits of the collaboration through individual student reflection papers.

In those scenarios in which the class works outside the Citizendium process, or within a closed Citizendium environment (such as an ad hoc namespace), the professor or the class can look over the final product and decide whether to vet it and make it into an "approved" Citizendium article. The instructor can then propose the topic to the Citizendium editors for introduction into the editorial flow. Note that it will always be possible to link to a specific version of an article, even after it has been edited. Note also that professors need not approve articles; some may not be approvable.

While Citizendium management gives a wide latitude to Eduzendium participants for purposes of choosing topics, professors may be asked not to choose articles that are currently undergoing active editing by Citizendium contributors. This should still permit very wide latitude of topic choice. Indeed, many course topics may not have any articles written at all. (We would love for you to get us started!)

In essence, the Eduzendium program fosters real-life conditions for collaborative intellectual projects within the participating seminars, which can result in a diversity of team (group) or individual projects. Instructors and students have complete control over the degree and nature of the editorial process. Specifically, they can decide the nature of the assignments and the degree to which they will be completed in collaboration with other students or with the Citizendium community, the amount of work allocated to contributing to the Citizendium, the nature of the rewards and penalties to be used in assessing student work, and the quality standards of this work. Finally, they can decide whether, how much, and when their work will be officially published on the Citizendium.

What are the educational benefits?

Writing a high-quality encyclopedia article about a specific topic requires, and trains, a specific sort of effort or discipline. Simply producing a suitably informative, but neutral, definition of a concept can require a great deal of thought. Crafting a jumble of facts into a coherent narrative, which the Citizendium requires, is a difficult but rewarding and educational task. Furthermore, investigating and deciding which bibliographic sources are the most reliable for an article exercises a very useful scholarly skill.

The educational benefits are plain if a student writes a general, neutral encyclopedia article on a topic, in addition to an opinionated paper about some special aspect of the topic.

Some Citizendium articles that were started in the framework of Eduzendium

Text in this section is transcluded from the respective Citizendium entries and may change when these are edited.

Developing Article Music perception: The study of the neural mechanisms involved in people perceiving rhythms, melodies, harmonies and other musical features. [e]
Processing a highly structured and complex pattern of sensory input as a unified percept of "music" is probably one of the most elaborate features of the human brain. In recent years, attempts have been made to investigate the neural substrates of music perception in the brain. Though progress has been made with the use of rather simplified musical stimuli, how music is perceived, and how it may elicit intense sensations, is far from understood.

Theoretical models of music perception face the big challenge of explaining a vast variety of different aspects connected to music, ranging from temporal pattern analysis, such as metre and rhythm analysis, over syntactic analysis, for example the processing of harmonic sequences, to more abstract concepts like the semantics of music and the interplay between listeners' expectations and suspense. Attempts to give some of these aspects a neural foundation are discussed below.

[Figure: a modular framework of music perception in the brain (sound, ear, source separation, pitch, metre, rhythm, lyrics, melody, harmony, consonance/dissonance, musical syntax, memory, emotion, motor control, meaning), after Koelsch et al. and Peretz et al.]

Several authors have proposed a modular framework for music perception[2][3]. After Fodor, mental "modules" have to fulfil certain conditions, the most important of which are information encapsulation and domain-specificity. Information encapsulation means that a (neural) system performs a specific information-processing task and does so independently of the activities of other modules. Domain-specificity means that the module reacts only to specific aspects of a sensory modality. Fodor defines further conditions for a mental module, such as rapidity of operation, automaticity, neural specificity and innateness, whose validity for music-processing modules has been debated.
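The two conditions above can be sketched in code. This is a purely illustrative Python sketch (all function names, input fields, and outputs are invented for this example, not part of any neuroscience model): each "module" reacts to only one aspect of the stimulus (domain-specificity) and runs without access to the others' internal state (information encapsulation).

```python
# Illustrative sketch only: hypothetical modules as encapsulated functions.

def pitch_module(sound):
    """Reacts only to the frequency aspect of the input."""
    return {"pitch_hz": sound["frequency"]}

def rhythm_module(sound):
    """Reacts only to the temporal aspect of the input."""
    return {"beat_onsets_ms": sound["onsets"]}

def perceive(sound):
    """A later integration stage merges the independent module outputs."""
    percept = {}
    for module in (pitch_module, rhythm_module):
        percept.update(module(sound))
    return percept

stimulus = {"frequency": 440.0, "onsets": [0, 500, 1000]}
print(perceive(stimulus))
```

Deleting or altering one module here leaves the other untouched, which is the selective-impairment pattern the lesion studies below look for.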

However, there is evidence from various complementary approaches that music is processed independently of, for example, language, and that there is not even a single module for music itself, but rather sub-systems for different relevant tasks. Evidence for spatial modularity comes mainly from brain-lesion studies in which patients show selective neurological impairments. Peretz and colleagues list several cases in a meta-study in which patients were not able to recognize musical tunes but were completely unaffected in recognizing spoken language[3]. Such "amusia" can be innate or acquired, for example after a stroke. On the other hand, there are cases of verbal agnosia in which the patients can still recognize tunes and seem to have an unaffected sensation of music. Brain-lesion studies have also revealed selective impairments for more specialized tasks such as rhythm detection or harmonic judgements.

The idea of modularity has also been strongly supported by the use of modern brain-imaging techniques like PET and fMRI. In these studies, participants usually perform music-related tasks (detecting changes in rhythm or out-of-key notes, for example). The obtained brain activations are then compared to a reference task, so that brain regions which were especially active for a particular task can be identified. Using a similar paradigm, Platel and colleagues have found distinct brain regions for semantic, pitch, rhythm and timbre processing[4].

To find out the dependencies between different neural modules, brain-imaging techniques with a high temporal resolution, such as EEG and MEG, are usually used; they can reveal the delay between stimulus onset and the processing of specific features. These studies showed, for example, that pitch height is detected within 10-100 ms after stimulus onset, while irregularities in harmonic sequences elicit an enhanced brain response 200 ms after stimulus presentation[2]. Another method for investigating the information flow between the modules in the brain is TMS. In principle, DTI or fMRI observations combined with causality analysis can also reveal those interdependencies. (Read more...)

Developed Article Speech Recognition: The ability to recognize and understand human speech, especially when done by computers. [e]

In computer technology, Speech Recognition refers to the recognition of human speech by computers for the performance of speaker-initiated computer-generated functions (e.g., transcribing speech to text; data entry; operating electronic and mechanical devices; automated processing of telephone calls) — a main element of so-called natural language processing through computer speech technology.

Speech derives from sounds created by the human articulatory system, including the lungs, vocal cords, and tongue. Through exposure to variations in speech patterns during infancy, a child learns to recognize the same words or phrases despite different modes of pronunciation by different people — e.g., pronunciation differing in pitch, tone, emphasis, and intonation pattern. The cognitive ability of the brain enables humans to achieve that remarkable capability. As of this writing (2008), we can reproduce that capability in computers only to a limited degree, but in ways that are still useful.


Waveform of "I went to the store yesterday."
Spectrogram of "I went to the store yesterday."
Writing systems are ancient, going back as far as the Sumerians of 6,000 years ago. The phonograph, which allowed the analog recording and playback of speech, dates to 1877. Speech recognition had to await the development of the computer, however, due to multifarious problems with the recognition of speech.

First, speech is not simply spoken text--in the same way that Miles Davis playing So What can hardly be captured by a note-for-note rendition as sheet music. What humans understand as discrete words, phrases or sentences with clear boundaries are actually delivered as a continuous stream of sounds: Iwenttothestoreyesterday, rather than I went to the store yesterday. Words can also blend, with Whaddayawa? representing What do you want?
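The segmentation problem can be made concrete with a toy sketch. This is a hypothetical greedy longest-match segmenter over a tiny known vocabulary; real recognizers use probabilistic acoustic and language models, not dictionary lookup.

```python
# Toy greedy longest-match segmenter: recover word boundaries from an
# unbroken character stream using a known vocabulary.

VOCAB = {"i", "went", "to", "the", "store", "yesterday"}

def segment(stream):
    words, pos = [], 0
    while pos < len(stream):
        # Try the longest vocabulary word matching at this position.
        for end in range(len(stream), pos, -1):
            if stream[pos:end] in VOCAB:
                words.append(stream[pos:end])
                pos = end
                break
        else:
            return None  # no vocabulary word fits here
    return words

print(segment("iwenttothestoreyesterday"))
# -> ['i', 'went', 'to', 'the', 'store', 'yesterday']
```

Greedy lookup already fails on blends like Whaddayawa?, which is one reason real systems score many candidate segmentations instead of committing to the first match.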

Second, there is no one-to-one correlation between the sounds and letters. In English, there are slightly more than five vowel letters--a, e, i, o, u, and sometimes y and w. There are more than twenty different vowel sounds, though, and the exact count can vary depending on the accent of the speaker. The reverse problem also occurs, where more than one letter can represent a given sound. The letter c can have the same sound as the letter k, as in cake, or as the letter s, as in citrus.
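The many-to-many mapping between letters and sounds can be illustrated with a toy pronunciation table; the phoneme strings below are simplified, hypothetical transcriptions, not a real phonetic lexicon.

```python
# Toy pronunciation table showing the many-to-many letter/sound mapping:
# "c" is /k/ in "cake" but /s/ in "citrus", while "k" in "kit" is /k/ too.

PRONUNCIATIONS = {
    "cake":   "k ey k",
    "citrus": "s ih t r ah s",
    "kit":    "k ih t",
}

def first_sound(word):
    """Return the first phoneme of a word's transcription."""
    return PRONUNCIATIONS[word].split()[0]

assert first_sound("cake") == first_sound("kit")     # different letters, same sound
assert first_sound("cake") != first_sound("citrus")  # same letter, different sounds
```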

In addition, people who speak the same language do not use the same sounds, i.e. languages vary in their phonology, or patterns of sound organization. There are different accents--the word 'water' could be pronounced watter, wadder, woader, wattah, and so on. Each person has a distinctive pitch when they speak--men typically have the lowest pitch, while women and children have a higher pitch (though there is wide variation and overlap within each group). Pronunciation is also colored by adjacent sounds, the speed at which the user is talking, and even by the user's health. Consider how pronunciation changes when a person has a cold.

Lastly, consider that not all sounds consist of meaningful speech. Regular speech is filled with interjections that do not have meaning in themselves, but serve to break up discourse and convey subtle information about the speaker's feelings or intentions: Oh, like, you know, well. There are also sounds that are a part of speech that are not considered words: er, um, uh. Coughing, sneezing, laughing, sobbing, and even hiccupping can be a part of what is spoken. And the environment adds its own noises; speech recognition is difficult even for humans in noisy places. (Read more...)

Developed Article Mashup: A data visualization created by combining data with multiple computer applications. [e]

A mashup is a complex form of data visualization. On the web, mashup often refers to an integrated application created by combining geographical location and other information with a service such as Google Maps or Microsoft Virtual Earth. The term has achieved widespread usage in describing this kind of web application since Google introduced its public Google Maps API[5] in 2005. Though not restricted to the web, mashups have become an increasingly popular internet paradigm, leading to the creation of a variety of web-based mashups. Tim O'Reilly lists mashups as one of the Web 2.0 technologies[6].
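Stripped of the mapping service, the core of such a mashup is just a join between two independent data sources. The sketch below uses invented listing and coordinate data; a real mashup would hand the combined records to a mapping API such as Google Maps rather than printing them.

```python
# Invented data: the data-combination core of a map mashup.

listings = [
    {"id": 1, "city": "Berlin", "price": 1200},
    {"id": 2, "city": "Paris",  "price": 1500},
]
coordinates = {  # a second, independent data source
    "Berlin": (52.52, 13.40),
    "Paris":  (48.86, 2.35),
}

def mash(listings, coordinates):
    """Annotate each listing with (lat, lon) from the second source."""
    return [
        {**item, "latlon": coordinates[item["city"]]}
        for item in listings
        if item["city"] in coordinates
    ]

for entry in mash(listings, coordinates):
    print(entry["city"], entry["latlon"])
```

Housing Maps, mentioned below, follows exactly this pattern: apartment listings joined with coordinates, plotted on a map.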

An example mashup: Housing Maps.


Before the availability of the Google Maps API, mashup-like applications were developed mainly with proprietary, complex geographic information systems (GIS) software packages. Such GIS applications have been available commercially since the 1980s, but it is only since the early 2000s that non-computer-experts have had the tools to let such combinations of maps and user-specific data proliferate on the web. Mashups that do not use spatial or mapping data are also possible, but the mapping application is likely the first kind that comes to mind when one says "mashup" in the context of the world wide web.


Mashups are a convergent technology of sorts. Convergence of communications is a recognition that a variety of communications can run over the same Internet Protocol-based infrastructure, without building a separate infrastructure for each service. From the standpoint of communications engineers convergence is not necessarily about the user interface or the merging of technologies. That may be a beneficial side effect, but it is not the focus of the groups concerned with convergence, such as the Multimedia Forum. To a communications engineer, mashups are not clearly distinguished from a multi-windowed interface, or even a structured dashboard, presenting multiple services to the end user.

Thanks to Google Maps, Internet mashups have become popular in recent years; however, the concept of mashups has been around for a long time in a context completely unfamiliar to typical Internet engineers. Before internet mashups became popular, "mashup" referred to music. Music mashups are the fusion of two or more songs by overlaying their tunes and lyrics to form a new song. They have been around since the beginning of recorded music. Before "mashup" was a popular buzzword, the technique was called multi-track recording and rerecording, in which the Beatles made notable advances. Today, music mashups have been extended to incorporate videos and are still prevalent in the entertainment industry. Websites like http://www.mashup-charts.com/ are used to rate amateur music mashups. (Read more...)

References

  1. Note that eduzendium.org redirects to this page!
  2. 2.0 2.1 Koelsch, S.; Siebel, W.A. (2005). "Towards a neural basis of music perception". Trends in Cognitive Sciences 9 (12): 578-584. DOI:10.1016/j.tics.2005.10.001.
  3. 3.0 3.1 Peretz, I.; Coltheart, M. (2003). "Modularity of music processing". Nat Neurosci 6 (7): 688-91. DOI:10.1038/nn1083.
  4. Platel, H.; Price, C.; Baron, J.C.; Wise, R.; Lambert, J.; Frackowiak, R.S.; Lechevalier, B.; Eustache, F. (1997). "The structural components of music perception. A functional anatomical study". Brain 120 (2): 229-243. DOI:10.1093/brain/120.2.229.
  5. "Google Maps API". Google (2008). Retrieved on 2008-08-16.
  6. "Levels of the Game: The Hierarchy of Web 2.0 Applications".

