Archive:Eduzendium
How to join
- For details about recruitment and the mechanisms and tools for collaboration, please see the dedicated Eduzendium Recruitment Page.
Operational details
How to categorize your pages, add templates, register and retrieve passwords, and so on.
See also
- A list of courses already integrated into Citizendium
- Eduzendium Testimonials — Eduzendium instructors discuss their experiences here.
Eduzendium[1] is a program in which the Citizendium partners with university courses throughout the world to create high-quality, English-language entries for the Citizendium.
If you have registered with Citizendium, you can start a page for your Eduzendium course here. Just type the title of your course in this inputbox (it has to start with "CZ:", which we have filled in already), and a suite of course pages will be prepared automagically when you press the button and follow the instructions.
What does Eduzendium do?
The Citizendium invites university instructors to include the crafting of a Citizendium article as a course assignment.
In brief, we encourage faculty to use the Citizendium as a platform for their students to write public entries about key terms pertaining to a number of disciplines.
The collaborative process
The Eduzendium program is designed to be extremely flexible and adaptable.
The simplest and most direct collaboration is for the professor to have students sign up on the Citizendium and either perform a certain amount of work or initiate and actively collaborate on a specific entry. In other situations, the professor can assign specific students to write specific entries, which can then be evaluated and edited for content and style individually. Editorial changes can be made by the professor, by a team designated by the professor, or by his or her entire class. This can be done on our wiki platform, in which case the topic can be reserved and closed to public access for a limited period of time. (You must ask, however, and make your intentions very clear.) Professors and their students can obtain access to a specific namespace or wiki page that is editable, and even readable, only by them for a period of time (typically, until the assignments are finished).
In essence, the Eduzendium program fosters real-life conditions for collaborative intellectual projects within the participating seminars, which can result in a diversity of team or individual projects. Instructors and students have complete control over the degree and nature of the editorial process. Specifically, they can decide the nature of the assignments and the degree to which they will be completed in collaboration with other students or with the Citizendium community, the amount of work allocated to contributing to the Citizendium, the nature of the rewards and penalties used in assessing student work, and the quality standards of that work. Finally, they can decide whether, how much, and when their work is officially published on Citizendium.
What are the educational benefits?
Writing a high-quality encyclopedia article about a specific topic requires, and trains, a specific sort of effort and discipline. Simply producing a suitably informative, but neutral, definition of a concept can require a great deal of thought. Crafting a jumble of facts into a coherent narrative, which the Citizendium requires, is a difficult but rewarding and educational task. Furthermore, investigating and deciding which sources are the most reliable for an article exercises a very useful scholarly skill.
Some Citizendium articles that were started in the framework of Eduzendium
- University of Edinburgh; articles on the theme of Appetite and Obesity, originally written by undergraduate students working in groups of about four.
- Circadian rhythms and appetite: Daily variations in the regulation of food intake.
- Energy balance in pregnancy and lactation: Adaptations in the control of food intake and energy expenditure in different reproductive states.
- Evolution of appetite-regulating systems: Comparisons of the mechanisms regulating food intake and energy expenditure between species.
- Glucostatic theory of appetite control: The theory that changes in blood glucose concentrations or arteriovenous glucose differences are detected by glucoreceptors that affect energy intake.
- Melanocortins and appetite: The regulation of food intake through neuropeptides related to adrenocorticotropic hormone.
- Stress and appetite: The interactions between the hypothalamo-pituitary-adrenal axis and the regulation of food intake.
- Food reward: The brain mechanisms involved in reinforcing feeding behaviour.
- Gut-brain signalling: The interaction between the gastrointestinal tract and the brain.
- Diabesity: A term referring to the intricate relationship between type 2 diabetes and obesity.
- Genetics of obesity: The evidence for a genetic component to obesity in humans.
- Bariatric surgery: Surgery on the stomach or intestines performed to induce weight loss.
- Drug treatments for obesity: Treatments of obesity that are based on drugs.
- Exercise and body weight: Correlation between physical activity and the body mass index.
- Health consequences of obesity: Long-term effects of obesity on health.
Others
Music perception: The study of the neural mechanisms involved in perceiving rhythms, melodies, harmonies, and other musical features.
Processing a highly structured and complex pattern of sensory input as a unified percept of "music" is probably one of the most elaborate features of the human brain. In recent years, attempts have been made to investigate the neural substrates of music perception in the brain. Though progress has been made with the use of rather simplified musical stimuli, how music is perceived and how it can elicit intense sensations is still far from understood.
Theoretical models of music perception face the major challenge of explaining a vast variety of aspects connected to music, ranging from temporal pattern analysis (such as metre and rhythm), through syntactic analysis (such as the processing of harmonic sequences), to more abstract concepts like musical semantics and the interplay between listeners' expectations and suspense. Attempts to give some of these aspects a neural foundation are discussed below.
Several authors have proposed a modular framework for music perception [2][3]. Following Fodor, mental "modules" have to fulfil certain conditions, the most important of which are information encapsulation and domain-specificity. Information encapsulation means that a (neural) system performs a specific information-processing task independently of the activities of other modules. Domain-specificity means that the module reacts only to specific aspects of a sensory modality. Fodor defines further conditions for a mental module, such as rapidity of operation, automaticity, neural specificity, and innateness, whose validity for music-processing modules has been debated.
However, there is evidence from various complementary approaches that music is processed independently of, for example, language, and that there is not even a single module for music itself, but rather sub-systems for different relevant tasks. Evidence for spatial modularity comes mainly from brain lesion studies in which patients show selective neurological impairments. Peretz and colleagues list several cases in a meta-study in which patients were not able to recognize musical tunes but were completely unaffected in recognizing spoken language[3]. Such "amusia" can be innate or acquired, for example after a stroke. On the other hand, there are cases of verbal agnosia in which patients can still recognize tunes and seem to have an unaffected sensation of music. Brain lesion studies have also revealed selective impairments for more specialized tasks such as rhythm detection or harmonic judgements.
The idea of modularity has also been strongly supported by modern brain-imaging techniques such as PET and fMRI. In these studies, participants usually perform music-related tasks (detecting changes in rhythm or out-of-key notes). The resulting brain activations are then compared with those from a reference task, so that brain regions especially active during a particular task can be identified. Using a similar paradigm, Platel and colleagues found distinct brain regions for semantic, pitch, rhythm and timbre processing[4].
To find out the dependencies between different neural modules, brain-imaging techniques with high temporal resolution, such as EEG and MEG, are usually used; these can reveal the delay between stimulus onset and the processing of specific features. Such studies have shown, for example, that pitch height is detected within 10-100 ms after stimulus onset, while irregularities in harmonic sequences elicit an enhanced brain response 200 ms after stimulus presentation[2]. Another method to investigate the information flow between modules in the brain is TMS. In principle, DTI or fMRI observations combined with causality analysis can also reveal these interdependencies.
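As a rough illustration of the subtraction paradigm described above, the following Python sketch simulates activation maps for a music task and a reference task and computes their contrast. All data, the "music-sensitive" voxel range, and the threshold are invented for illustration; this is a toy model, not an actual neuroimaging pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)
n_scans, n_voxels = 30, 500

# Simulated activation maps: baseline noise everywhere, plus extra
# signal in a small, invented "music-sensitive" region during the task.
reference = rng.normal(0.0, 1.0, size=(n_scans, n_voxels))
music_task = rng.normal(0.0, 1.0, size=(n_scans, n_voxels))
music_task[:, 100:120] += 1.5  # hypothetical task-responsive region

# Subtraction paradigm: contrast = mean(task) - mean(reference).
contrast = music_task.mean(axis=0) - reference.mean(axis=0)

# Voxels whose contrast exceeds a simple threshold are flagged as
# "especially active" during the music task.
active = np.flatnonzero(contrast > 1.0)
print(f"{active.size} voxels flagged, first few: {active[:5]}")
```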
Speech Recognition: The ability to recognize and understand human speech, especially when done by computers.
In computer technology, Speech Recognition refers to the recognition of human speech by computers for the performance of speaker-initiated, computer-generated functions (e.g., transcribing speech to text; data entry; operating electronic and mechanical devices; automated processing of telephone calls), a main element of so-called natural language processing through computer speech technology.
Speech derives from sounds created by the human articulatory system, including the lungs, vocal cords, and tongue. Through exposure to variations in speech patterns during infancy, a child learns to recognize the same words or phrases despite different modes of pronunciation by different people, e.g., pronunciation differing in pitch, tone, emphasis, and intonation pattern. The cognitive ability of the brain enables humans to achieve that remarkable capability. As of this writing (2008), we can reproduce that capability in computers only to a limited degree, but in ways that are still useful.
Writing systems are ancient, going back as far as the Sumerians of 6,000 years ago. The phonograph, which allowed the analog recording and playback of speech, dates to 1877. Speech recognition, however, had to await the development of the computer, due to the multifarious problems that recognizing speech presents.
First, speech is not simply spoken text--in the same way that Miles Davis playing So What can hardly be captured by a note-for-note rendition as sheet music. What humans understand as discrete words, phrases or sentences with clear boundaries are actually delivered as a continuous stream of sounds: Iwenttothestoreyesterday, rather than I went to the store yesterday. Words can also blend, with Whaddayawa? representing What do you want?
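The segmentation problem has a textual analogue: recovering word boundaries from an unbroken character stream. The following Python sketch, a toy dynamic-programming word-splitter rather than anything a real recognizer uses, shows how a dictionary can re-insert those boundaries; the vocabulary here is invented for the example.

```python
def segment(stream, vocabulary):
    """Split an unbroken character stream into dictionary words, if possible."""
    # best[i] holds one valid segmentation of stream[:i], or None.
    best = [None] * (len(stream) + 1)
    best[0] = []
    for i in range(1, len(stream) + 1):
        for j in range(i):
            if best[j] is not None and stream[j:i] in vocabulary:
                best[i] = best[j] + [stream[j:i]]
                break
    return best[-1]

vocab = {"i", "went", "to", "the", "store", "yesterday"}
print(segment("iwenttothestoreyesterday", vocab))
# -> ['i', 'went', 'to', 'the', 'store', 'yesterday']
```

Real recognizers face a far harder version of this problem, since their input is noisy acoustic features rather than clean characters, and many competing segmentations may be plausible.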
Second, there is no one-to-one correlation between the sounds and letters. In English, there are slightly more than five vowel letters--a, e, i, o, u, and sometimes y and w. There are more than twenty different vowel sounds, though, and the exact count can vary depending on the accent of the speaker. The reverse problem also occurs, where more than one letter can represent a given sound. The letter c can have the same sound as the letter k, as in cake, or as the letter s, as in citrus.
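A small lookup table makes this many-to-many relationship concrete. The phoneme labels below are informal stand-ins rather than strict IPA, and the table covers only the examples from the paragraph above:

```python
# Informal illustration of the many-to-many mapping between letters
# and sounds, using the examples mentioned above.
letter_to_sounds = {
    "c": ["k (as in cake)", "s (as in citrus)"],
}
sound_to_letters = {
    "k": ["c (cake)", "k (kite)", "ck (back)"],
}
for letter, sounds in letter_to_sounds.items():
    print(f"letter '{letter}' -> sounds: {', '.join(sounds)}")
for sound, letters in sound_to_letters.items():
    print(f"sound '{sound}' -> letters: {', '.join(letters)}")
```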
In addition, people who speak the same language do not all use the same sounds, i.e., languages vary in their phonology, or patterns of sound organization. There are different accents--the word 'water' could be pronounced watter, wadder, woader, wattah, and so on. Each person also has a distinctive pitch: men typically have the lowest pitch, while women and children have a higher pitch (though there is wide variation and overlap within each group). Pronunciation is also colored by adjacent sounds, the speed at which the user is talking, and even by the user's health. Consider how pronunciation changes when a person has a cold.
Lastly, consider that not all sounds consist of meaningful speech. Regular speech is filled with interjections that do not have meaning in themselves, but serve to break up discourse and convey subtle information about the speaker's feelings or intentions: Oh, like, you know, well. There are also sounds that are a part of speech that are not considered words: er, um, uh. Coughing, sneezing, laughing, sobbing, and even hiccupping can be a part of what is spoken. And the environment adds its own noises; speech recognition is difficult even for humans in noisy places.
Mashup: A data visualization created by combining data with multiple computer applications.
A mashup is a complex form of data visualization. On the web, mashup often refers to an integrated application created by combining geographical location and other information with a service such as Google Maps or Microsoft Virtual Earth. The term has achieved widespread usage in describing this kind of web application since Google introduced its public Google Maps API[5] in 2005. Though not restricted to the web, mashups have become an increasingly popular internet paradigm, leading to the creation of a variety of web-based mashups. Tim O'Reilly lists mashups as one of the Web 2.0 technologies[6].
Before the availability of the Google Maps API, mashup-like applications were being developed mainly with proprietary, complex geographic information systems (GIS) software packages. Such GIS applications have been available commercially since the 1980s, but it is only since the early 2000s that non-experts have had tools that allow such combinations of maps and user-specific data to proliferate on the web. Mashups that do not use spatial or mapping data are also possible, but the mapping application is likely the first kind that comes to mind when one says "mashup" in the context of the world wide web.
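A minimal sketch of such a map mashup in Python follows, using the open-source folium library (a wrapper around the Leaflet mapping library) as a stand-in for the Google Maps API discussed above; the place names and coordinates are invented sample data.

```python
import folium  # pip install folium

# Invented user-specific data to overlay on the base map.
sightings = [
    ("City Library", 51.5074, -0.1278),
    ("Riverside Park", 51.5155, -0.0922),
]

# Combining a base map with a user data layer is the essence of a map mashup.
m = folium.Map(location=[51.51, -0.11], zoom_start=13)
for name, lat, lon in sightings:
    folium.Marker(location=[lat, lon], popup=name).add_to(m)

m.save("mashup.html")  # open the generated file in a browser to view it
```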
Mashups are a convergent technology of sorts. Convergence of communications is the recognition that a variety of communications can run over the same Internet Protocol-based infrastructure, without building a separate infrastructure for each service. From the standpoint of communications engineers, convergence is not necessarily about the user interface or the merging of technologies. That may be a beneficial side effect, but it is not the focus of the groups concerned with convergence, such as the Multimedia Forum. To a communications engineer, mashups are not clearly distinguished from a multi-windowed interface, or even a structured dashboard, presenting multiple services to the end user.
Thanks to Google Maps, Internet mashups have become popular in recent years; however, the concept of mashups has long existed in a context completely unfamiliar to typical Internet engineers. Before internet mashups became popular, mashups referred to music. Music mashups are the fusion of two or more songs, overlaying their tunes and lyrics to form a new song, and they have been around since the beginning of recorded music. Before "mashup" became a popular buzzword, the technique was called multitrack recording and re-recording, in which the Beatles made notable advances. Today, music mashups have been extended to incorporate videos and are still prevalent in the entertainment industry. Websites like http://www.mashup-charts.com/ are used to rate amateur music mashups.
References
1. Note that eduzendium.org redirects to this page!
2. Koelsch, S.; Siebel, W.A. (2005). "Towards a neural basis of music perception". Trends in Cognitive Sciences 9 (12): 578-584. DOI:10.1016/j.tics.2005.10.001.
3. Peretz, I.; Coltheart, M. (2003). "Modularity of music processing". Nat Neurosci 6 (7): 688-91. DOI:10.1038/nn1083.
4. Platel, H.; Price, C.; Baron, J.C.; Wise, R.; Lambert, J.; Frackowiak, R.S.; Lechevalier, B.; Eustache, F. (1997). "The structural components of music perception. A functional anatomical study". Brain 120 (2): 229-243. DOI:10.1093/brain/120.2.229.
5. "Google Maps API". Google (2008). Retrieved on 2008-08-16.
6. "Levels of the Game: The Hierarchy of Web 2.0 Applications".