Still Images: Their Power and Relationship with Other Media

"The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift." Albert Einstein

"Our educational system, as well as science in general, tends to neglect the nonverbal form of intellect. What it comes down to is that modern society discriminates against the right hemisphere." Roger Sperry, 1973

"...We've moved from an economy built on people's backs to an economy built on people's left brains to what is emerging today: an economy and society built more and more on people's right brains." Daniel Pink, 2006

Multimedia Toolbench Chapter

In order to better understand our capacity to compose and comprehend, it is important to consider the dual-coding nature of how our brain functions. Visual thinking is one of the composition skills deeply affected by the left-brain/right-brain duality. The historical impact of our primary technology for live and archived communication, language and text, however, may have been to favor one type of thought to such a degree that it suppressed other important intellectual capacities, including visual thinking. The economic restrictions and technical requirements of publishing articles and books on paper further contributed to a discrimination against certain types of mental capacity. Mioduser, Nachmias and Forkosh-Baruch (2008) note a distinct gap between an inner world of print-based educational and academic systems and an outer world of a media-rich culture and a growing Web-based knowledge society. This chapter will examine ways that digital communication, supported by a wide range of software tools, can not only liberate the intellectual nature of image processing but also enhance many other forms of thought, providing new routes for educational practice.

As the illustration of the brain's hemispheres suggests (Luciano, 2008), some generalizations have been reached about the specialization of the two sides of the brain, also called lateralization of brain function. The right hemisphere specializes in objects, events, synthesis of the present moment, musicality, the emotional overtones of speech, and many forms of non-verbal creativity. The left hemisphere has specialized more in perceiving and generating language, time sequence, categories, details, consciousness and more (Coleman, 2001), including the processing of numeric manipulation and mathematical concepts in the left temporo-parietal area (Levy, Reis & Grafman, 1999). Other functions, including emotions, sound localization and arithmetic, are under more bilateral control (Dehaene et al., 1999; Dehaene et al., 2003). These two hemispheres are in constant communication through some 300 million axonal fibers of the corpus callosum, yet they have distinct characteristics and personalities (Taylor, 2008). Taylor characterized the left brain as a serial processor and the right brain as a parallel processor. The photograph of the front view of an actual human brain shows the interior connecting corpus callosum (Taylor, 2008). To say that the sides of the brain are specialized does not mean exclusivity, but rather that one hemisphere sometimes helps the other side in ways not yet understood. For example, scenes and faces are processed simultaneously on both sides (Golby et al., 2001), though we cannot yet examine in what way that information is being processed.

This duality has implications for thinking visually. To fully activate our visual capacity to think, compose and edit would appear to require some degree of quieting of the left hemisphere so that the right can excel while we are awake. Sleeping and dreaming may play an important role in allowing the right hemisphere time to more fully function.

There are important exceptions to the generalizations about the hemispheres. Though about 95% of right-handed people have their specialized language functions in the left brain, about 20-30% of left-handed people have their language areas in the right hemisphere or have bilateral functioning (Holder, 2005; Taylor, 1990). Of left-handed thinkers, estimates of left hemisphere language dominance range from 61% (Taylor, 1990) to 73% (Knecht et al., 2000), which still indicates some language processing on both sides. To refer to hemispheric dominance may be a more accurate reference, but the shorthand of left-right brain thinking may be adequate if we acknowledge the imperfect nature of the generalization. Hemisphere specialization has a number of implications for education as well. For example, reading scholars have continued Paivio's research (1986) on vocabulary development through dual-coding theory.

How can we best approach visual thinking? Much of what we know about the function of various areas of the brain has come from experience with areas that stop functioning (Sacks, 1986, 2008). For a striking account of how this brain duality also impacts us personally, socially and culturally, follow the link to Dr. Taylor's TED talk describing her life's work and her stroke. How does the relationship between left-hemisphere text and right-hemisphere image work in processing the stages of problem solving?

The "gift" Einstein was referring to in his famous quote at the beginning of this essay is the right brain's creativity. Einstein revolutionary thesis in physics emerged from playing imaginatively with the idea of riding on a beam of light. Taylor's intution developed through imagery that emerged in the silence of the left hemisphere. The process of articulating and verifying occurred after the creative idea had emerged. Is one more important that the other? Is either sufficient unto itself? With these thoughts in mind, we will explore further problems and issues with multimedia communication in the context of two-dimensional images.

What methods are in use for activating non-language thinking in the right brain?

Which comes first when a new idea comes to mind, the words or the picture? What is the relationship between text and image? How does this relationship impact the learning process and major subject areas?

What impact should this have on future curriculum development? View how text is used with these images and, for comparison, study one of these photo galleries.

Culturally we commonly use two-dimensional images in a wide variety of ways. Another term for this topic is "still image," to contrast it with the moving images of animation and television. It is difficult to imagine any instructional event in any hour of the school day in which the display of a still image could not be usefully involved or image composition employed. But reading an image is just one side of the coin. Even more neglected is the perceptual shift of vision required to draw. The ability to ever more fluidly shift to different perspectives is at the heart of creativity in all fields of study and endeavors in life.

What methods might be used to accent or heighten image composition and thinking? In her book Drawing on the Right Side of the Brain (1979, 1989, 1999), Edwards noted the need for exercises and activities that quiet the language activity of the brain and stimulate the visual in order to make the mental shift to see as an artist sees. The shift is similar to the feeling of image synthesis when the hidden images of a stereogram emerge into view (see the examples by Walde, 1993, and Ventures, 2008). In fact, it is helpful to try meditative activity to reduce verbal thinking in order for the stereogram images to appear. This approach may be useful for beginners, but it does not imply that both hemispheres cannot be highly trained in many perspectives and function simultaneously; one famous example from history is Leonardo da Vinci. This exceptional mind clearly had exceptional educational experiences that stimulated a wide range of perspectives. As education becomes able to teach and provide the greater range of composition skills stimulated by the nature of composition on the Web, it holds out the possibility of a new wave of Leonardos with a new level of diverse ability.

How might Web design and communication better balance the relative capacity of the two hemispheres of our brain, or more importantly, enhance the more suppressed forms of intellectual activity? This chapter focuses on one element of that answer. The walls of our classrooms, halls, and homes, as well as our textbooks, magazines and children's books, are covered with still images. Where do they come from? What role can computers play in their creation? Why do we value images so highly yet attend to them so lightly? One implication for Web design is that the juxtaposition of text with image may contribute to the suppression of visual thought, a problem for which digital systems provide some effective options; these are considered further below under the heading Output.

Image Processing

A good beginning point is an understanding of the image composition process, of working through the basic steps in the creation and sharing of images. Three stages are useful here: input, manipulation, and output. These cannot be considered totally distinct stages, however, for input and manipulation are shaped by the purposes of the output and its audience.

Input

In a sense, composition begins with the very selection of input tools and the emergence of the idea of the image you wish to have or see. Composition is framed by purpose. For example, to aim a camera is to position a virtual rectangle around a perspective in order to capture it. Whether sketching a picture on paper or on screen, this decision about what to frame is a highly creative and inventive stage of the process. Why did you choose that view? What conscious or unconscious forces make that view seem relevant to you or others? Input devices represent technologies that have some way to place the images they capture into a computer-readable (digitized) format.

Input devices for a two-dimensional image include: a hand as it paints or draws; cameras, both analog and digital, which can be connected to a variety of lenses including telescopes and microscopes; the computer mouse; graphic tablets; scanners; CD and DVD players with clip art; cell phone cameras; and videotape and videodisc players and video cameras, both analog and digital. Engineers might use drafting-table-size graphic tablets that cost several thousand dollars.

The quality of the output also depends on the quality of the input. The more pixels per unit of space that can be captured, the greater the accuracy of reproduction. In turn, pixels vary in the depth of color that they can display.
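
For a rough sense of the numbers involved (an illustrative calculation, not a specification of any particular device): an uncompressed image of 640 x 480 pixels captured at 24 bits (3 bytes) of color depth per pixel occupies about 640 x 480 x 3 = 921,600 bytes, or roughly 900 KB, before any compression. Doubling the number of pixels captured in each dimension quadruples that figure.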

Input Tools

There are many choices. The more choices studied and used, the more powerful your use and composition of imagery becomes.

  • draw by hand
  • pressure-sensitive graphic tablets (Wacom tablets and pens; examples)
  • a painting or sketch drawn on paper (then scanned into the computer)
  • cell phone cameras

    Shooting an Image

    See Jeremy Birn's excellent discussion of three-point lighting, a foundational set of principles for many fields and many types of compositions. Whether lighting an object in a 3D software application or working in the real world, the principles of three-point lighting are the same. Every time you work with an image, whether using someone else's work or composing your own, you should be conscious of the key, fill and back (rim) light. When in possession of a lighting kit or studio light sources, the key light and the fill light or fill lights come from the front side of the subject. The fill light should be around half or less of the intensity of the key light, while the intensity of the back light or back lights should be whatever is necessary to achieve a rim of highlights on the top edge of the subject, providing a strong sense of the three-dimensional depth of the subject. The boundaries of these principles are then pushed to achieve different effects for artistic and aesthetic purposes. When not in control of multiple light sources, use the natural light and the focus of the camera to work towards a similar effect.
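
    As a rough illustration of the intensity ratio (numbers chosen only for the example, not taken from Birn's tutorial): with two lamps of equal wattage, placing the fill lamp about 1.4 times farther from the subject than the key lamp roughly halves its intensity at the subject, since light falls off with the square of distance (1 / 1.4² ≈ 1/2), giving the classic 2:1 key-to-fill ratio. Alternatively, a 250-watt fill beside a 500-watt key at equal distances yields the same ratio.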

    The subject of lighting should not be taken lightly. If one explanation of three-point lighting such as Mr. Birn's does not work for you, there are many others; see a Google search for 3 point lighting. Numerous books and articles are devoted to just this subject. The Library of Congress devotes many subject terms to it, several of which are relevant to this discussion and important to keep in mind as the need for further knowledge grows: photography lighting; portrait photography lighting; stage lighting; video recording lighting. The general subjects of architectural and decorative lighting and interior lighting become important when working with scenes for 2D and 3D imagery and for animation scenes. Entire careers in the film, television and theater industries are built around just the work of lighting a scene.

    This triangulation of a subject with contrasting lights, of different intensities and from different angles, can also be seen in a more philosophical and theoretical sense. From it one could conclude that one should not expect a single source of information to fully reveal the truth of a subject. This makes it important to seek multiple perspectives of different intensities. Further, it means that there is always another perspective that could "light" a subject in a different way and reveal new truths. This reasoning is an important part of discussions of social diversity, such as racial, economic and cultural diversity.

    This thinking will become increasingly important as multimedia composers become more capable of working in multiple forms of composition. From this perspective, media composers can think of a seven-point "lighting" system, seeking unity of purpose (the real meaning of unimedia) when presenting and providing interaction with text, still images, audio, video, 2D animation, 3D animation, and live information through electronic remote control and sensors. The goal should be to create a unified whole in which all angles of information support each other. The very act of blending different types of media forms another way to "light" or think about the subject being explained or taught. The question of accent and balance is as important to teaching and learning as it is to photography.

    Should unimedia composition also appropriate the thinking of lighting experts and repurpose the terms of key, fill and back lighting? That is, should one form of information presentation clearly dominate, forming the equivalent of a "key light," such as explaining a subject primarily with photography; then be supported by "fill lights," which might include text, 2D animation, music and oral narrative; and then use "back lights" to "rim" or bring highlighted depth to narrow parts of the subject, which might, for example, include 3D animation, remote sensors and viewer feedback or commentary? Given the interactive nature of the computer environment, should not the input of the viewer be considered an equally valid and important part of the overall composition that one creates?

    Does a unimedia composer have a seven-point, eight-point or larger "lighting" system? With media, how many points of intellectual light are there? What is the rationale for the defense of your answer?

    Manipulation

    There are a number of fine computer tools for the creation and manipulation of images. At a very basic level is the Paint program included free with the Windows operating system. Microsoft Office 95, 97 and 98 included some basic draw features in the Word word processor. More powerful and school-affordable are the draw and paint tools in AppleWorks (formerly ClarisWorks), a program that runs on both Macintosh and Windows computers. Version 5.0 and later of AppleWorks save files in a wide variety of formats of use for both print and web display. Professional users, however, work with a series of programs that vary in their emphasis on tools for painting, drawing and image manipulation. The features of even these professional programs are well within the range of ability of K-12 students once they have had some initial training. The first level of this knowledge base requires an understanding of a few concepts.

    Building an Image

    Manipulation Tools

    Output

    Output of image processing goes to two primary forms in the 21st century: paper and screen displays.

    Printing Output

    The type of output chosen depends on the way the audience will view the work. Printers may produce work from the size of a sheet of paper to 50-foot billboards. Printers print in black and white and in color, and on a variety of types and thicknesses of paper. Other output devices include: slide shows (both analog and digital); projection systems for large screens and video walls of dozens of large TV screens; color plotters; cutting knives that cut floor tile; sign printers; overhead transparencies; and a wide range of posters that can be scaled to the size of highway billboards and larger. Scanners can capture work in a wide range of resolutions, from 72 dpi (dots per inch) to several thousand dpi. PhotoCD technology promoted by Kodak digitizes images and puts them on CDs in a variety of pixel densities and graphic formats, including JPEG. Different software applications specialize in display for computer screens and computer projection systems. On non-networked computers, PowerPoint is perhaps the most commonly used application for image display. Among networked computers, browsers for the World Wide Web system on the Internet are the most common application for the display of information.
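
    For a rough illustration of how resolution relates to output size (the numbers are chosen for simplicity, not drawn from any particular printer): an image of 1500 x 1200 pixels printed at 300 dots per inch yields a 5 x 4 inch print (1500 / 300 = 5 inches; 1200 / 300 = 4 inches). The same image shown on a 72 dpi monitor spreads across roughly 21 x 17 inches of screen, which is why an image sized for print often appears far larger than expected on screen.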

    Web Output

    The Internet and its WWW (World Wide Web) have introduced a whole new set of image output considerations: blind users; speed; within-image control (image maps); limited color selection; and methods for highlighting non-verbal communication. How might Web design and communication better balance the relative capacity of the two hemispheres of our brain, or more importantly, enhance the more suppressed forms of intellectual activity? A few options exist, and the use of classroom projection systems can further enhance non-verbal thought.
    1. Organize a web article into a series of linked files, so that screens in the sequence are dominated by, or given over exclusively to, non-verbal forms of expression (a minimal sketch appears after this list).
    2. Create links in the text that put non-verbal elements in pop-overs that cover the sequential text below, requiring their observation and removal before the text sequence can continue.
    3. Use frame pages so that the different elements of the frame can be hidden by the viewer to shift focus from verbal to other forms of communication.
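
    A minimal sketch of the first approach, with hypothetical file names: each screen in the sequence is given over almost entirely to an image, with a single link carrying the reader to the next screen.

        <!-- screen 2 of a linked sequence; file names are illustrative only -->
        <img src="watershed-diagram.gif" alt="Diagram of the watershed" width="600" height="400">
        <p><a href="watershed-3.html">Continue to the next screen</a></p>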

    Blind

    For users of the Internet who are blind, a special technique is employed. An HTML attribute, ALT (alternative text), can be used to give an inserted image a textual description that is not shown by graphical browsers but is displayed by text-only browsers. In Netscape Composer and other web editors, the commands that allow image insertion also have a special field for entering this descriptive text. The ALT attribute then holds text that describes the image, and if the browser is a text-only browser, the text is presented instead of the image. The screen reader of the blind user then speaks this text or converts it to braille for a braille display.
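
    A minimal sketch of the markup involved (the file name and description here are hypothetical):

        <img src="corpus-callosum.jpg" alt="Front view of a human brain showing the corpus callosum" width="300" height="200">

    A graphical browser shows the photograph; a text-only browser or screen reader presents the ALT text in its place.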

    Speed

    Because it takes much longer to transmit images than text, the WWW system requires special handling of images. Another meaning of WWW is World Wide Wait! Multimedia elements such as still images are a major source of the wait. In the years ahead, fiber optic lines will make concern about waiting obsolete, but for now most users of the Internet are using standard telephone modems. If several images are included, one page could take several minutes to transmit over such modems before the page is complete. The smaller the image, the less data it contains and therefore the faster it transmits. Slow-loading pages chase newcomers away from further exploration of a web site.
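
    A back-of-the-envelope calculation suggests the scale of the problem (the figures are illustrative): a 100 KB image is about 800,000 bits, and over a 56 kbps modem running at its theoretical maximum that is roughly 800,000 / 56,000 ≈ 14 seconds. Real-world modem throughput is lower, so a page carrying several such images can indeed take a minute or more to finish loading.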

    The first strategy is to make the image smaller by scaling or cropping, or both, in a way that reduces the file size of the image. This is not the same thing as using small height and width attributes when inserting an image in a web page; those have no impact on the size of the file being transmitted. The more images that will appear on one web page, the more the file size of each image needs to be reduced to speed up loading of the page. In addition to making the image smaller so that it contains less data, different compression formats are used to optimize the transmission speed of the image. Pay close attention to the file size of your images to gain an appreciation of what happens as an image moves through different stages for different forms of output. A second strategy is to use thumbnails, in which a set of very small images is displayed and each small image links to a much larger version of the image. This could also be graduated, with each image linking to an ever larger version. Search the web for "thumbnails" to find examples and further information.
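
    A minimal sketch of the thumbnail pattern (file names are hypothetical): a small, quick-loading image is wrapped in a link to the full-size version, which loads only when the viewer asks for it.

        <a href="canyon-large.jpg">
          <img src="canyon-thumb.jpg" alt="Thumbnail: canyon view at sunset" width="120" height="80">
        </a>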

    Hotspots can connect parts of a single image to other files. The underlying structure of every web page is HTML, or hypertext markup language. HTML code allows specific parts of an image to be designated as link points, so that a click of the mouse can connect with and force the display of another file.
    Study this example. Explore Sabrina's Web Site through the use of a clickable web site map.
    The web site map links to the other web pages that make up Sabrina's site. These clickable areas are referred to as hotspots; the more technical term is image map. There are many examples scattered across the Internet. The most efficient approaches to image map creation are built into the better commercial web editors. Specialized and free approaches are also available.
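
    A minimal sketch of a client-side image map (file names, coordinates and page names are hypothetical): the usemap attribute ties the image to a map element, and each area element defines one rectangular hotspot that links to another page.

        <img src="site-map.gif" alt="Map of the web site" usemap="#sitemap" width="400" height="300">
        <map name="sitemap">
          <area shape="rect" coords="10,10,190,140" href="photos.html" alt="Photo gallery">
          <area shape="rect" coords="210,10,390,140" href="writing.html" alt="Writing samples">
        </map>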

    Commercial Software

    How To Make Client-Side Image Maps

    Color

    Even though current personal computers can display millions of colors, a significant number of older computers still in use around the world can only display 256 colors. This will be called the 256-color palette. Macintosh and Windows computers running in 256-color mode share 216 colors while varying on the remaining 40. By not using the 40 variable colors, web designers have a palette that can be used to create images without fear of an image drawn on one platform being hurt by a missing or poorly substituted color on another platform. This also applies to background colors and the color of text. Reducing the number of colors used in an image also allows better compression of the file and therefore faster transmission over the Net.

    If a computer cannot find a color that it needs, it can dither the color. That is, it takes pixels of different colors and puts them next to each other so that they blend in our eye to form a new color close to the one that is missing. Sometimes dithering fails spectacularly; in these cases the color of the text cannot be easily distinguished from the background color. Staying with the Browser-Safe Palette eliminates this problem.
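
    Browser-safe colors are those whose red, green and blue components each use one of the six hexadecimal values 00, 33, 66, 99, CC or FF, which is where the 216 figure comes from (6 x 6 x 6 = 216). A minimal sketch in the HTML of the period, with colors chosen only for illustration:

        <!-- every value below is built from the pairs 00/33/66/99/CC/FF, so no platform needs to dither -->
        <body bgcolor="#003366" text="#FFFFCC">
          <font color="#99CCFF">A browser-safe accent color</font>
        </body>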

    As the quality of computers in homes and offices is upgraded and the speed of the Net increases, the importance of this issue will fade and disappear. In an increasing number of cases the problem has already disappeared. If the image creator knows that the images are designated for use by a particular institution that uses more recent computers on a broadband network running at 10 megabit Ethernet or higher, the problem is gone. In the meantime, there are a variety of ways to express which colors can be safely used on the large number of computers that can only display 256 colors:

  • Computer Color Matters (this site also provides a wide range of excellent information on the nature and use of color)
  • Web or Browser Safe Colors (understandable by most computer users)
  • Colors and RGB Hex Numbers (understandable by graphic designers)
  • No Dither Netscape Color Palette (understandable by those working in HTML code)
  • What is Gamma? (not just an issue for engineers)
  • The future of color on the World Wide Web

    Multimedia - Text-Merge

    We have seen many factors that must be considered during the output stage of image processing. Output, however, is far more than technique. Ultimately it is about the image designer's intention. Why is a particular image chosen? Which images have what effects? Once these questions have been answered, an image is created and a new problem emerges: the relationship between the medium of text and the medium of the still image must be considered. This might be called the issue of text-merger. The joining of image and text is one form of multimedia. What strategies can we bring to determining the best balance or relationship between image and text? How does the nature of the Internet change these strategies? In later chapters, it will be useful to consider whether this concept can be applied to other media as well.

    A classic example of image-text balance is found in the articles of National Geographic. Their articles often begin with a four-page foldout, then facing pages of two-page spreads. These images start with a few words in large fonts placed over an image. Later images use a few more words in a smaller font over the image. These are followed by images covering a full page, then half-page and quarter-page images with several sentences of description under or next to each picture. As images cover less than a full page, standard-size text fills the remaining space, until near the end of the article full pages of text intermingle with pages carrying quarter- and half-page images. That is, the articles begin with an opening that is heavy with image and end heavy with text, with a gradient between these two points. This concept can be expressed by the image below.
     

    [Image: the two-triangle "strategy" diagram, in which the space given to image tapers off as the space given to text widens across the pages of an article]


    Electronic slide show tools such as PowerPoint, HyperCard, HyperStudio and ToolBook can easily use the same strategy. Do you think this strategy is effective? Why or why not? Can you think of other strategies? Play with the two-triangle image in your mind and other approaches will occur to you.

    On the Internet, the larger the image, the slower it transmits. An ideal page takes under 3 seconds to load.  This has forced the adoption of an image-text balance that is somewhat the reverse of print technology.
     

    [Image: a rectangle divided lengthwise, with the word "text" on one side and an image on the other]
    An opening page to a site needs a small amount of text with a very small image or an image format that transmits very quickly. Once interest has been built, the next set of pages linking from that page have more text and slightly larger images; third-tier pages have even more text and/or larger images. This is an issue that will increasingly go away as the viewer's speed of access to the Internet increases. For the foreseeable future, however, this will remain an important consideration.
     
    These strategic considerations are also used in the teaching of very complex topics that take years to master, for example, reading. Two- and three-year-olds sit in our laps "reading" picture books with us. The earliest books for children to read, early readers, are primarily image with just a few words per page. (The electronic age will soon produce "books" which still include just a few words at the bottom, but in which most of the display space carries the other elements of multimedia beyond the static image, including 3D and video with audio.) As readers and learners become more sophisticated, the ratio of image to text changes in favor of text. This suggests that educators should weigh this strategy in all of their educational decisions. What we know about text-image relationships implies that learners of any age need more image when they first encounter new ideas, that the need for something other than text, such as image, is proportional to the complexity and difficulty of a topic, and that this need lessens over time.

    First-hand or direct experience provides the ultimate input. Multimedia is the first level of abstraction after direct experience. Why does this work? If we hypothesize that the brain is less a word processor and more an image processor, then initial instruction is teaching the brain a set of images for basic manipulation. As the learner becomes more sophisticated and can attend to the nuances of finer detail, the brain can use the information compression efficiency of text to stimulate and transform those images for higher levels of thought. That is, even when the text is so dense as to crowd the image out of the display area, the mind does its real and perhaps unconscious thinking in media (e.g., images, sound), not in the high abstraction of words and numbers. If the fundamental image (media) capacity is not in place, throwing more text at a problem of misunderstanding does not help, because there is no image capacity to stimulate. Different words may still help if they can stimulate a deeper level of experience, but without some experience the words have no meaning. Adult readers may go right to the text and have something to hang the text on, thereby concluding that imagery and the pictures adjacent to the text are not so valuable for them. It is true that the images next to the text may be ignored by many, but the text functions for them because the necessary images (or other media forms) are already within the brain and available for use.

    In later chapters we will evaluate the degree to which this concept transfers to other media beyond the still image. This hypothesis has other implications. It would also mean that the most powerful educators are those who can readily reach back to "older" forms of learning in our developmental biology. A teacher's skill includes providing such structures for text to activate; the skill is in teaching the learner to recognize the link between the media structures embedded in long-term memory and the right text. However, it is the nature of professional training that this education ends in works dense with text, works filled with the compressed abstractions of word and number. The danger is that the text which professionals know so well is seen as the place to start instead of the place to finish. The answer to curriculum problems is often seen as a rewrite of complex text into simpler text. If it is true that text works by stimulating deeper and older structures, then text is of marginal initial value without these deeper levels of awareness and experience. Now that computers provide so many ways and so much power to create and manipulate the missing image (and other sensations), there is both great excitement and great potential for addressing curriculum problems in new ways.

    Copyright

    Output is also more than just what you can do. It also involves consideration of what is legal and ethical. Many composers and designers have spent long hours perfecting their creations and expect remuneration and attribution for their effort. Legal issues over the use of images involve U.S. copyright law, its Fair Use provisions and some general agreements between publishers and educational institutions. There are two critical conclusions to remember. First, whether you are invoking the Fair Use provisions for educators or not, the source of your text or any other medium must be cited. Always give attribution. Second, the Fair Use provision does not apply to open distribution across the Internet. For example, Fair Use would apply to a properly cited image or multimedia resource in a PowerPoint slide, but it would not apply to that same image on a web page that can be seen by anyone with access to the Internet. If your use is educational, and if your web distribution is to a limited audience, such as a class of students, then copyrighted images may appear on web pages. This limitation might take the form of an intranet within the school that others outside the building cannot see. Web designers can also use password protection, carefully distributing the password to only the students in their course or institution. The limits of class size for such distribution have not, to my knowledge, been tested in a court of law. In case of doubt, contact the creator of the image (or whatever form the creation takes) and ask for written permission for the use that you have in mind. If the permission arrives as an email message, print the message and, as with any such correspondence, keep it on file.

    The Image and the Internet

    At first it would seem that the computer and the Internet's capacity to link text and image on the same page adds little to the capacity that textbook and paper technology have had for centuries. In one sense this is true. This assumes, though, that every author, instructor and teacher has had equal capacity for the creation and publication of image and text. In fact, the creation of color plates has been an extremely costly, sophisticated and time-consuming process. Even with excellent low-cost color printers, the cost of color image handouts is high, and they are therefore seldom used in classrooms. The Web (with its supporting technology of computer and Internet) has effectively erased the cost problem of color display for the composer. (This of course assumes that the cost of the display station has already been addressed.) The Web has also effectively obliterated the time from the creation of an image to the sharing of that image and eliminated the cost of reproduction. Once an image is completed, it takes but seconds to save a file and upload it to a web server with a distribution capacity greater than the broadcast footprint of any single orbital satellite. The only remaining limitations are the resolution of computer display screens and the time that it takes to put sufficient image creation ability in the hands of the author. As your experience of working with these chapters increases, you will have a deeper appreciation for just how long this takes.
     

    Educational Implications

    Computer composition software and the web have dramatically and irreversibly changed the nature of composed communication. Capable software for image composition and editing comes included in the price of current computers or is available for free downloading from the net, and much more capable software is commonly available commercially. This sea change calls for a re-examination of the language arts (reading and writing) curriculum for the purpose of integrating other common and future standards of communication. As a beginning point, giving much greater attention to image creation would provide the opportunity for numerous changes to school curriculum. These suggestions come with the understanding that a broad range of scope and sequence issues would need to be addressed to better incorporate image composition. Later chapters will consider the impact of increasing access to a wide range of linking (multimedia) tools.

    If pre-school and primary grades (BK-3) students were given as much access to computers with drawing and painting programs as to other forms of writing and drawing, many of the concepts of the writing process curriculum could become part of their thinking and habits before they learn to write. That is, instead of pre-writing, composing with text, editing/revision and publishing, the process would include pre-drawing, composing with images, editing/revision and publishing. Writing would be but another more abstract way to use an old and familiar process. Elements of such a strategy are already in place with an early childhood emphasis on art and with early readers using a heavy balance of image and little text, then an increase in text as readers gain more experience. Since the process has a long successful history with teaching reading, there is reason to believe that a similar process could be effective with writing.

    At the intermediate grades, better readers of text may not miss composing with images, but late-developing writers will have a means and process of communication from which they can continue to draw parallels as they struggle to become comfortable with text composing. At this age, composition assignments should include many different variations of the National Geographic strategy explained above in the Multimedia - Text-Merge section. Applications with electronic slideshow composition tools that work across different computer platforms, such as current versions of AppleWorks and PowerPoint, are excellent tools for such composition.

    Based on the knowledge acquired in the elementary grades, older students would be in a position to increasingly turn out publications that match the professional but everyday standards of current print and web publications, with their balance of image and text. This would also set a foundational level of experience from which many other forms of communication, including video, music, animation and more, could be incorporated into a given publication. This would increase the relevance and status of student learning, an important factor in motivation for teaching and learning.

    Summary

    This chapter has covered both practical and conceptual considerations with images. Imagery is a critical and basic part of communication systems at this point in the history of our culture. Especially for educators, this means that basic skills with image composition are just as important to current and future standards for public communication as skills with the reading and writing of text. The skills involved are many, and their depth and complexity mean that instruction and mastery will take some time to acquire. In turn, this would require integration into the scope and sequence of school curriculum over a number of years. Fortunately, computers have greatly expanded and accelerated our capacity to work with and communicate with images. Further, a claim has been raised that image is a primary and text a secondary consideration in the development and understanding of a new idea. The more that students can be involved in the creation of their own meaning through composing in media other than text, the more effective multimedia and learning efforts become.

    Next

    Having completed this review of concepts in still images, click the start button in the top left frame and click through the menu choices, exploring and completing steps and activities as assigned. Continue with this overall pattern in the chapters that follow.

    Still Image Bibliography


    Coleman, Andrew M. (2001). dual-code theory. A Dictionary of Psychology. Retrieved December 14, 2008 from http://www.encyclopedia.com/doc/1O87-dualcodetheory.html

    Dehaene S, Spelke E, Pinel P, Stanescu R, Tsivkin S. (1999, May 7). Sources of mathematical thinking: behavioral and brain-imaging evidence. Science, 284(5416), 970-4. PMID 10320379.

    Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506.

    Edwards, Betty (1979). Drawing on the right side of the brain. Boston: Houghton Mifflin Co. (later editions in 1989 and 1999)

    Golby, A. J., Poldrack, R. A., Brewer, J. B., Spencer, D., Desmond, J. E., Aron, A. P., & Gabrieli, J. D. E. (2001, September). Material-specific lateralization in the medial temporal lobe and prefrontal cortex during memory encoding. Brain, 124(9), 1841-1854.

    Holder, M. K. (2005). What does handedness have to do with brain lateralization, and who cares? Retrieved December 10, 2008 from http://www.indiana.edu/~primate/brain.html

    Knecht S, Dräger B, Deppe M, Bobe L, Lohmann H, Flöel A, Ringelstein EB, Henningsen H. (2000). Handedness and hemispheric language dominance in healthy humans. Brain. 123(12), 2512-2518. http://brain.oxfordjournals.org/cgi/content/full/123/12/2512

    Levy LM, Reis IL, Grafman J. (1999, Aug 11). Metabolic abnormalities detected by 1H-MRS in dyscalculia and dysgraphia. Neurology. 53(3), 639-41. PMID 10449137

    Mayer, R. E. & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.

    Mioduser, D., Nachmias, R., & Forkosh-Baruch, A. (2008). New literacies for the knowledge society. In J. M. Voogt & G. A. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 23-42). Springer. Retrieved December 8, 2008 from http://books.google.com/books?id=X2dIYc5PpTkC&printsec=frontcover#PPA23,M1

    Moreno, R., & Mayer, R. E. (2000). A coherence effect in multimedia learning: the case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal of Educational Psychology, 92, 117-125.
    Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart, and Winston.

    Paivio, A. (1986). Mental representations: A dual coding approach. Oxford, England: Oxford University Press.

    Pink, D. (2006). A whole new mind: Why right-brainers will rule the future. New York: Penguin Books.

    Sacks, O. (1986). The Man Who Mistook His Wife for a Hat. Picador.

    Sacks, O. (2008). The Man Who Forgot How to Read. Thomas Dunne Books.

    Taylor, Insup and Taylor, M. M. (1990). Psycholinguistics: Learning and using language. p. 362.

    Ventures, Inc. (2008). Retrieved December 11, 2008 from http://www.eyetricks.com/3dstereo.htm

    Walde, Scott (1993). Random dot pictures. Retrieved December 11, 2008 from http://scott.saskatoon.com/code/rdot.html


    Page author: Houghton

    Original Pub.1.20.99: Version 3.05 Updated 1.1.2009.