Tuesday, January 29, 2008
In cooperation with Martin Diamant, Günter Erhart, and “Best Before”
© Gebhard Sengmüller
"VinylVideo™ is a fake archaeology of media. We designed a device that retrieves video signals (moving image and sound) stored on a conventional vinyl (LP) record. The discontinuity in the development of electronic film technology constitutes the historical background for this fictitious video-disc technology: Even though television, the electronic transmission of moving images, had been feasible since the late 1920s, storage of these images became possible only after development of the video recorder in 1958. Recording images for private use did not become available until the mass introduction of the VCR in the early 1980s (!). Before, the average consumer was confined to use Super-8 film, a technology dating back to 1900, usually without sound. Recording of television was not possible at all."
(it's an article from Leonardo Magazine, so you can only read it if you are a subscriber or pay for the article)
Monday, January 28, 2008
Canonicalization: A Fundamental Tool to Facilitate Preservation and Management of Digital Information
in: LEONARDO, Vol. 40, No. 5, pp. 469–474, 2007 (as part of the refresh! conference papers)
"The author reinterprets the artistic phenomena that composed historical avant-garde art. His method of interpretation is an intertextual strategy that approaches the historical artifacts through recent phenomena. The first case study is of structural film; its most important attributes appear to be artistic strategies questioning the structural/material integrity, durability and permanence of the film work. The second case study is of the avant-garde strategy of collective work, reinterpreted through the open-source work and interactive art of today. The author identifies three steps in the development of the 20th-century concept of joint creative work: avant-garde general strategies of artistic collaboration; avant-garde film works oriented toward creative collectivism; and collaborative artistic practices that manifest themselves in non-hierarchical strategies of contemporary interactive art."
- archive of festival catalogues:
Sunday, January 27, 2008
"For centuries, our presumption of authenticity has been premised on the presence or absence of visible formal elements and on an uninterrupted line of legitimate custody. The use of digital technology has not only reconfigured those formal elements, allowed for the bypassing of production controls, and made of physical custody an elusive concept, but, first and foremost, it has eliminated the original work, that is the first complete instantiation being communicated either across space (to persons other than the author) or time (saved for later access by the author or legitimate successors).
If electronic materials are ever to be considered as authentic as those on traditional media, the practices by which they are created, maintained, made accessible and used must be analyzed, and strategies and standards for their authentic preservation must be developed. This is the mission of InterPARES (International research on Permanent Authentic Records in Electronic Systems), a research endeavor that aims to develop the theoretical and methodological knowledge essential to the permanent preservation of authentic materials generated and/or maintained electronically, and, on the basis of this knowledge, to formulate model policies, strategies and standards capable of ensuring that preservation.
Increasingly, however, organizations and individuals have been generating works of a dynamic, experiential, or interactive nature, which will need different, and perhaps work-type specific, authenticity requirements and selection and preservation strategies.
Clifford Lynch describes experiential digital objects as objects whose essence goes beyond the bits constituting them to incorporate the behavior of the rendering system, or at least the interaction between the object and the rendering system.
[...] it is necessary to develop an understanding of the new digital objects, not only in the later phases of their life cycle, but from the moment of their creation.
[...] We have to consider the possibility of substituting the characteristics of completeness, stability and fixity with the capacity of the system where the work resides to trace and preserve each change the digital object has undergone. And perhaps we may look at this new digital entity as existing in one of two modes, as an entity in becoming, when its process of creation is in course (even if such creation is ongoing), and as a fixed entity at any given time the work is viewed."
about the Interpares project:
InterPARES1 was initiated in 1999 and concluded in 2001. It focused on the development of theory and methods ensuring the preservation of the authenticity of records created and/or maintained in databases and document management systems in the course of administrative activities, and took the perspective of the preserver.
InterPARES2 was initiated in 2002 and concluded in 2006. In addition to dealing with issues of authenticity, it delved into the issues of reliability and accuracy during the entire lifecycle of records, from creation to permanent preservation. It focused on records produced in complex digital environments in the course of artistic, scientific and e-government activities.
InterPARES3 was initiated in 2007 and will continue through 2012. The project builds upon the findings of InterPARES 1 and 2, as well as of other digital preservation projects worldwide. It will put theory into practice, working with small and medium-sized archives and archival/records units within organizations, and develop teaching modules for in-house training programs, continuing education, and academic curricula.
Their terminology database (including the glossary, ontology and dictionary):
Richard Rinehart: A System of Formal Notation for Scoring Works of Digital and Variable Media Art (pdf)
(about:) Freud-Lissitzky Navigator: FAQ
1. What is Freud-Lissitzky Navigator ?
Freud-Lissitzky Navigator is a computer game prototype; a software narrative*; a virtual exhibition; an imaginary software; a tool to navigate through 20th century cultural history; an experiment in developing analysis of new media which uses the very forms of new media (in this case, computer games and software interfaces).
Software Narrative-- a theoretical or fictional narrative about software (L.M.).
2. Where is the actual game?
We will post the playable prototype once its reconstruction is complete. Note, however, that the project history and the accompanying historical images are also important parts of the game. Whereas in normal computer games, motion rides, and other forms of new media the "background story" is usually a prelude to the game experience, here this story becomes the main part.
3. What kind of gameplay can I expect?
The gameplay in Freud-Lissitzky Navigator will largely consist in navigating through the narrative of game development. Each part of the narrative (i.e., Freud's meeting with Lissitzky; Eisenstein's contribution; the Prague episode, etc.) will occupy a separate level. The player will also have a chance to uncover, for extra credit, various related cultural events of the 20th century hidden on each level.
4. Is Freud-Lissitzky Navigator a game or a piece of software?
It is both. Since the key part of the game is the historical narrative, we are developing different software interfaces to navigate through this narrative. They will be based on common software interfaces such as a database, virtual space, hypermedia, image composing, spreadsheet. For example, a Photoshop-like interface will allow composing of separate "text objects" (i.e., particular historical events) into a coherent narrative. A complementary "de-composing" operation will allow decomposing a snapshot from the game into its historical layers.
""The Museum Is History" may sound like a harmless statement of fact to connoisseurs of bronzes and bibelots, but to connoisseurs of bits it's starting to sound like a rallying cry. In a world where 21st-century economies of reproduction and distribution are displacing 19th-century economics of acquisition and collection, storing one-of-a-kind treasures in a secure, climate-controlled vault would seem to serve little purpose for digital artists and curators. Why rely on a museum to confer respectability when you can earn it yourself on the World Wide Web? To this way of thinking, "The Museum of the Future" is little more than a contradiction in terms.
Like so many of digital art's seemingly radical aspects, its disdain for the museum is less an unprecedented innovation than an inherited trait. The genetic debt in this case is owed less to those siblings that usually get pride of place in digital art's family tree--namely avant-garde film, video installation, and kinetic sculpture--than to its more distant cousins from the 1960s and 1970s--Conceptual, Process, and Performance Art. Working with the Guggenheim Museum's Panza collection of work from that period, I've come to understand why certain strategies for "dematerializing" the art object failed--and how digital artists working three decades later can learn vital lessons from these mistakes. Most importantly, my research has led me to the ironic conclusion that the most extreme departures from the material object, digital or otherwise, are ultimately the ones whose future depends on the very institution they were designed to render obsolete.
Dan Flavin's fluorescent light installations are a case in point. Flavin deliberately chose only the eight or so standard colors of fluorescent tubing readily available. When the Guggenheim and Dia Center for the Arts mounted a retrospective of his work a few years ago, the curators discovered that one of those colors, the deep cherry red, had been discontinued because exposure to a toxic pigment coating the interior of the tube presented a workplace hazard to its manufacturers. As a consequence, the collectors of Flavin's work have had to buy up all the cherry red bulbs they could find to store them for use in future shows. What an irony that works based on the seemingly infinite reproducibility of industrial fabrication are now stored in the Guggenheim's warehouse like so many Kandinskys and Picassos!
Present-day aesthetes of digital art snicker at the naivete of artists from the 1960s like Flavin, but meanwhile they commit the same mistake in a decade where technical obsolescence occurs much faster than the thirty years it took Flavin's fluorescent tubes to go out of date. For example, the Web did not exist in its present form five years ago; what is the chance that it will exist five, ten, fifteen years from now? Regardless of what replaces the Web--3D projections, a VR interface, smart walls--there is no guarantee that the public will be able to view projects currently designed for the Web in that new medium.
To be sure, some artists making work for the Web argue that the sheer number of html pages out there increases the likelihood that the information they contain will still be accessible in the future. True enough--but accessible in what form? Html is already a variable format, which is why the same Web page can look different on different people's screens. Individual users can set their browsers to various page sizes and background colors, default typefaces and sizes, even whether to display images at all--while still accessing the same ASCII data. To me, however, this variability of html makes it all the more likely that it will evolve into something else ten years from now. I don't see any reason that the millions of information-based Web pages now being produced from Boston to Bangkok can't be ported to some future network protocol, much as we now translate old databases like dBASE into newer formats like Access. The process of conversion will not capture everything, but the average user will--and should--push for that conversion because it will offer her new features.
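The lossiness Ippolito describes is easy to demonstrate with a toy sketch (the page markup below is invented for illustration): migrating a styled HTML page to plain text preserves the verbal information while silently discarding exactly the presentational features an artist might depend on.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the character data of a page, discarding all markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks).strip()

# A hypothetical 1990s-style page with hard-coded visual design.
page = '<body bgcolor="black"><p><font color="red" size="7">Hello</font> world</p></body>'

extractor = TextExtractor()
extractor.feed(page)

# The verbal content survives the "migration" ...
print(extractor.text())  # Hello world
# ... but the carefully controlled design does not: bgcolor,
# font color, and size are simply gone from the converted form.
```

This is the sense in which a conversion "will not capture everything": what the average user counts as everything is the text, not the enframing design.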
The problem, of course, is that the features that would likely be lost in such a conversion would be exactly those features on which Web artists currently depend. Most art projects now being made for the Web rely not just on a specific collection of verbal or quantitative information, but on a carefully controlled visual design or play on the medium that enframes them. What will become of an artist's precisely rendered 800- x 600-pixel bitmap if screen resolution jumps to 10,000,000 x 10,000,000? It will degenerate into either a tiny image in the middle of an otherwise blank screen, or a big image with appallingly low resolution.
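The arithmetic behind that degeneration is worth checking; a quick sketch, taking the essay's own deliberately hyperbolic screen size at face value:

```python
# An 800 x 600 bitmap displayed 1:1 on the essay's hypothetical future screen.
artwork_px = 800 * 600                # 480,000 pixels in the artwork
screen_px = 10_000_000 * 10_000_000   # 10^14 pixels on the screen

coverage = artwork_px / screen_px     # fraction of the screen area covered
linear = 800 / 10_000_000             # fraction of the screen's width covered

print(f"area coverage:  {coverage:.2e}")  # 4.80e-09
print(f"width coverage: {linear:.2e}")    # 8.00e-05
```

At one pixel per pixel the work would occupy well under a millionth of the display; scaled up to fill it, each original pixel would be stretched across a 12,500-pixel-wide block. Both outcomes match the essay's "tiny image or appallingly low resolution" dilemma.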
This is not an academic question for museums that are considering acquiring Web projects into their collections. In the most publicized case to date of a museum collecting a Web site, the San Francisco Museum of Modern Art "acquired" several sites into its design collection, including graphics from the influential art site ada'web. The trouble is, these images were merely stored on a CD-ROM for future viewing, thus reducing a dynamic, netcast original to a static image with no external links. This is a bit like owning a Ferrari on an island without any roads; it looks nice parked in your yard, but that's not the way it was meant to be experienced. (Storing Web pages on CD-ROMs doesn't exactly solve the problem of the eventual obsolescence of CD-ROMs, either.) Of course, artists or curators in the year 2020 will be able to run a Web project locally provided they unearth a fossilized Pentium and a copy of Netscape, or hire a programmer to write an html emulator on a state-of-the-art desktop computer. But those are local solutions. If direct distribution over a global network wasn't important to an artist's work, then was their art really made for the Web in the first place?
Given the dizzying pace of technological advances, some argue that Web sites and other online projects are inherently ephemeral and cannot be collected. The problem with this policy, however, is that it encourages museums to fetishize the more conventional objects that the market has already approved, while letting the most radical work "slip through the cracks" of art history. That in itself might not make museums obsolete, but it would make them awfully boring.
How, then, can we avoid repeating the mistakes of Flavin's generation? By learning from the generation of artists who immediately followed. Minimalist artists like Flavin allowed their work to be destroyed and recreated over and over, as long as it was in the same medium. Certain artists from the subsequent movements we know as Conceptual, Process, or Performance Art, however, conceived of what I call *variable media*. Exactly what aspect of the work was "variable" depended on the piece: it could be a relatively straightforward aspect like size and shape (in the case of Sol LeWitt's expandable wall drawings) or it could be the very content of the piece (as in Robert Barry's contribution to the PROSPECT '69 exhibition, which consisted of the ideas people had when reading his interview in the catalogue). Other aspects that could vary include configuration (Robert Morris's sculpture Permutation), composition (Barry Le Va's Ends Meet/Ends Cut installations), and even value (Douglas Huebler's Duration Pieces).
To be sure, LeWitt and Barry may have been thinking more about stretching the definition of art than about safeguarding their work for the future. But the strategies they invented offer digital artists and their collectors an alternative to packing away a cathode ray tube and Pentium PC in a climate-controlled crate. I've been discussing this alternative, which I call the Variable Media initiative, with my fellow curators at the Guggenheim. The idea is to:
1. Recognize when a potential acquisition involves a variable medium. In some cases, such as the LeWitt and Barry pieces mentioned earlier, the variability of the medium is intrinsic to the artist's intent. In other cases, as in most fluorescent sculptures or Web pages, it is unintentional and stems from the fact that the conditions in which the piece was originally created will someday no longer be available.
2. Contact the artist to determine if the piece has an "expiration date." Robert Rauschenberg, for example, considers his performances from the 1960s to be ephemeral works that cannot be recreated. Robert Morris, on the other hand, has restaged his performances from the same period in subsequent decades--with, of course, a different cast, props, etc.
3. Interview the artist and any others involved in the original production of the piece to gather as much information as possible about the degree of flexibility inherent in the definition of the work. Can the work exist in two different locations at once? A Felix Gonzalez-Torres can; a Donald Judd can't. Can the work be freely distributed without permission? The answer is yes for copylefted software or Lawrence Weiner's piece in Collection Public Freehold, no for most other copyrighted material. Can the work be recreated at a variety of scales? Bill Viola specifies the projected image of his video installation _City of Man_ down to the centimeter. John Simon, meanwhile, considers his Java applet _Every Icon_ appropriate for a 15-inch monitor, a thirty-foot videowall, or even a handheld Palm Pilot.
4. Acquire any documentation or objects, including scores, notes, props, photos, and moving images, that would be helpful to future recreators of the piece. The Whitney Museum owns a video of Barry Le Va making one of his felt scatter pieces, to help them recreate the installation if the artist is unavailable.
5. Make this documentation available to anyone interested in restaging the work. The museum would then give its imprimatur to any performance or recreation of a work which it felt was consistent with the artist's original intent. Note that this vests the museum with an authority it never previously possessed: the capacity to declare a restaging authentic or not. However, it also burdens the museum with the responsibility to make information about the piece freely available--and perhaps to participate in the process of translating the work into a new medium and context if the old one is unrecoverable.
What is the incentive for us artists to hand over decisions about the future of our work to a prim bunch of curators who've never wielded a paintbrush or written a Java applet? Very simply, to ensure that other people will be able to see our work in some form once we are pushing up daisies. Many artists will not want to give up that measure of control and will choose to let their works die with them--and that's fine. Nevertheless, as outlandish as the idea may seem to traditional collecting practices, the Variable Media initiative offers an alternative for those whose conception of their work goes beyond its manifestation in a particular form. And it helps us imagine the museum as an incubator for living, changing artworks--rather than a mausoleum for dead ones."
link to the directory of more of Jon Ippolito's writings:
Saturday, January 26, 2008
Based on the Symbol Theory of Nelson Goodman
and John Dewey's Philosophy of Experience and Education.
Research project, founded in 2007
art, cognition, experience and education
Considering the consequences of a "systematic study of symbols and symbol systems and the way they function in our perceptions and actions and arts and sciences, and thus in the creation and comprehension of our worlds" (Nelson Goodman, Languages of Art), radical changes in our educational technology should be developed.
The exploration of the perception and creation of the various symbol systems provides a broader concept of understanding, including a philosophy of experience that also takes emotion and sensory perception seriously as basic aspects of the human being, indispensable for creating meaning and self-consciousness.
If media art is to be accessible to its potential champions and disseminators, and open to being curated, researched and studied, then new partnerships need to be initiated and ingrained patterns of rivalry replaced by constructive exchange between media competence - creative and theoretical, scattered across the globe and gained, wherever it is found, through much effort - and the infrastructures and resources that have developed over the years. Only in that way can sustainable, i.e. self-supporting, projects be launched with good prospects of survival. M.A.D. sees itself as an information system true to these tenets, at the disposal of media art and its theory, and as a networking interface between material and knowledge, while remaining as independent as possible of individual figures, locations, and their respective preferences and interests.
'Top-level networking' in media art history and theory demands a transparent and universally accessible information system that does equal justice to the work done by the sites of production, distribution and data archiving and by the individuals engaged in the provision and scholarly processing of information on media art. Previous experience worldwide has shown that the laborious but, for the survival of such an information system, crucial task of compiling such a database is at odds with the hitherto customary top-down structures. The signs are that, in the longer term, there will be no real alternative: media theory and practice must meet at eye level.
M.A.D. therefore proposes a bottom-up networking structure as the decisive motive force in assembling potent aggregates of knowledge and expertise. That structure should be the forum from which both the 'distributed editors' and an 'advisory board' responsible for development and co-ordination are delegated and constituted.
M.A.D. should offer a suitable platform for presentation, distribution and interaction - for the proponents of horizontal, collaborative, grass-roots networking with social-networking sites, wikis and other Web 2.0 attributes, and for artists, theorists and developers interested in setting up a semantic Web 3.0.
The quality of M.A.D. is to be assured and sustained not least by minimising editorial stipulations so as to maximise the input from distributed, competent potential contributors. Acquiring data and information 'first-hand' will also reduce the time and effort spent on gaining permission from copyright holders and on other editorial matters, as experience has shown - and as users are coming to expect. Lastingly minimised costs and increasing data-acquisition speeds are at once among M.A.D.'s foremost objectives and among the preconditions for achieving the ultimate aim: the sustainability and durability of data and discourses.
M.A.D. AS A MODEL
The Internet in the (as yet) Browsing Age is without doubt a playground of entropy and lack of structure - and so, of frustration. Luckily it also lets anyone and everyone see how far removed our offline world still is from a functioning democracy. It shows us the useless waste of creative potential that is suppressed or squandered.
Artists, academics and researchers, those on the left, liberals and conservatives - all have long understood that human attention represents an economic value. All the more reason why the already acute proliferation and abuse of testimonials, evaluation services, grading and selection, sloganisation and individual criteria should be discussed in an open, decentralised manner and at the same time be evaluated by 'third parties'.
That current state-of-the-art terminology should almost inevitably lead to concepts such as tagging and Web 3.0 or the Semantic Web lies in the nature of a living language. Irrespective of whether or how far we are removed from this vision, M.A.D. will remain relevant in the context of these most recent developments, too. If the web of the future - the 'Semantic Web', the 'Live Web', the 'Intelligent Web', 'Web 3.0' - may be described as a database of databases, then M.A.D. could, in the sense sketched out here, serve as a model; for the methods of recording used in and for media art projects are pre-eminently suited to the description of any 'objects' and 'subjects', real or virtual.
JOIN AND BUILD UP M.A.D.
Let's find out what we may achieve together through rule-free discourse, ubiquity, global consciousness, open and active archiving, and distributed knowledge. Become a part of the M.A.D. community and help shape M.A.D. as a model for a meeting of creative competence and competent discussion in media practice and theory. Please visit M.A.D., identify the missing links, and help create new mind maps out of yesterday's and the future's media meta-noise. With your contribution, the important initial steps taken so far by media creators, curators and thinkers could continue in a more constructive, joint way for all of the interest groups mentioned (and not mentioned) above.
note: headed by Slavko Kacunko
Their self-description:
"The Variable Media Network proposes an unconventional new preservation strategy that has emerged from the Guggenheim’s efforts to preserve its world-renowned collection of conceptual, minimalist and video art and that is supported by the Daniel Langlois Foundation for Art, Science, and Technology. The aim of this affiliation is to help build a network of organizations that will develop the tools, methods and standards needed to implement this strategy."
"The variable media paradigm pairs artists with museum and media consultants to provoke comparison of artworks created in ephemeral mediums. The initiative aims to define each of these case studies in terms of medium-independent behaviors and to identify artist-approved strategies for preserving artwork with the help of an interactive questionnaire."
"For artists working in ephemeral formats who want posterity to experience their work more directly than through second-hand documentation or anecdote, the variable media paradigm encourages artists to define their work independently from medium so that the work can be translated once its current medium is obsolete.
This requires artists to envision acceptable forms their work might take in new mediums, and to pass on guidelines for recasting work in a new form once the original has expired.
Select an option at left to learn more about how this philosophy can be applied to specific cases."
"The Database of Virtual Art documents the rapidly evolving field of digital installation art. This complex, research-oriented overview of immersive, interactive, telematic and genetic art has been developed in cooperation with established media artists, researchers and institutions. The web-based, cost-free instrument - appropriate to the needs of process art - allows individuals to post material themselves. Compiling video documentation, technical data, interfaces, displays, and literature offers a unique answer to the needs of the field. All works can be linked with exhibiting institutions, events and bibliographical references. Over time the richly interlinked data will also serve as a predecessor for the crucial systematic preservation of this art."
"Media Art Net www.medienkunstnetz.de
Media art—by definition multimedia, time-based or process-oriented—cannot be sufficiently mediated in book form. Mainstream art and cultural mediation, still being primarily print-based, do little justice to its specificity. On the other hand, Net-based media have not yet been able to establish platforms that reach more than the usual circle of insiders. Introducing the range of topics related to media and art, «Media Art Net» thus aims at establishing an Internet structure that offers highly qualified content by granting free access at the same time. Tendencies of art and media technology development throughout the twentieth century serve as the background for promoting historic and contemporary perspectives on artistic work in and with the media. A combination of diverse representational modes will offer a condensed, attractively presented multimedia focus for the interested 'surfer,' as well as profusely documented, in-depth information for users specifically involved in research. The main objective is, therefore, to establish theoretically and audio-visually convincing forms of relationships and references that cross the boundaries of genre. A consistently bilingual version (German/English) further transmits the international character of this undertaking.
«Media Art Net» foremost promotes topical cross references offering various access points:
- specific: the classic index and search engine based on a complex structure of database links
- the exploratory approach: via visual summaries
- the artistic perspective: as it emerges in newly commissioned Net projects by Blank&Jeron, Ismael Celis, Daniela Alina Plewe and others
- scientific-historic aspect: as formulated in topical essays by competent authors
An initial step is the exemplary survey of historic and current positions and contexts of media art. In a second phase, exploring eight thematic topics locates seminal interfaces between media and art. A network of curators presents a variety of approaches and contexts:
- Aesthetics of the Digital MECAD, Barcelona, Claudia Giannetti
- Image-Sound-Relations HGB, Leipzig, Dieter Daniels
- Cyborg Bodies HGKZ, Zürich, Yvonne Volkart
- Generative Tools IMG, Mainz, Tjark Ihmels
- Photo/Byte HGB Leipzig
- Art and Cinematography HfBK, Dresden, Gregor Stemmrich
- Mapping and Text ZKM, Karlsruhe, Rudolf Frieling
- Public Sphere_s ZKM, Karlsruhe, Steve Dietz"
Browsing the netzspannung.org database
A central component of netzspannung.org is its publicly accessible archive. It comprises numerous media-art works, projects from IT research, and lectures on media theory and art history, presented as text, images and video. The archive contains editorially curated content as well as contributions from the community.
netzspannung.org follows the understanding that contextualisation and visualisation are decisive factors in opening up and appropriating knowledge. The archive interfaces therefore offer alternative ways into the netzspannung.org database.
The Classic View interface presents the contents of the database in list form. Image information is an essential component here, supporting a visual approach to the database's contents. The contents (project descriptions, events, articles) can be sorted and filtered by various criteria.
› To the Classic View
The Archive Browser offers an overview of all the contents of the database. They are ordered by various categories (people, contents, keywords, newest entries), grouped into indices and presented as alphabetical lists. Users can, for example, get an overview of all the people listed in the archive, or browse the database contents grouped by keyword.
› To the Archive Browser
The Randomizer interface automatically generates, on every page load, a random selection of 30 images, each of which points to an entry in the database. It offers a purely visual and intuitive way into the archive.
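The Randomizer's described behavior, a fresh random draw of 30 distinct entries on each page load, can be sketched in a few lines (the archive entry IDs below are invented placeholders, not real netzspannung.org identifiers):

```python
import random

# Hypothetical stand-in for the archive: a list of entry IDs, each of
# which would carry an image linking back to its database record.
archive_entries = [f"entry-{n:04d}" for n in range(5000)]

def randomizer_view(entries, count=30):
    """Return a random selection of distinct entries, as on each page load."""
    return random.sample(entries, count)  # sampling without replacement

selection = randomizer_view(archive_entries)
print(len(selection))       # 30
print(len(set(selection)))  # 30 -- no duplicate images within one load
```

Sampling without replacement is the natural choice here: showing the same image twice in one grid of 30 would waste one of the interface's few purely visual entry points into the archive.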
› To the Randomizer
The Semantic Map presents the archive as an overview map that structures and visualises the netzspannung.org database entries according to semantic criteria. The Semantic Map offers new ways of discovering semantic relationships between contents from different disciplines. Since the interface is based on text analysis, a separate map is provided for German texts and for English texts. The contents of the two maps differ, because many database entries exist in only one language.
"Created in the spring of 1997 through a donation from Daniel Langlois, the Daniel Langlois Foundation for Art, Science, and Technology is a private, non-profit charitable organisation with international activities.
The Foundation aims to further artistic and scientific knowledge and understanding. Through its actions, it seeks to bring art and science closer together within a technological context. On the one hand, the Foundation nurtures a critical awareness of the impact of technology on human beings and their natural and cultural environments. On the other hand, it promotes the exploration of aesthetics suited for environments shaped by human beings.
The Foundation's programs are designed to further learning among individuals, groups and organisations in order to promote new knowledge and new uses of digital media and information technology.
For the Foundation, the concept of knowledge is based on interactions among researchers, artists, scientists and other individuals, as well as organisations who are both the source and recipients of various forms of knowledge. It therefore also seeks to promote the emergence of knowledge founded on local practices that contribute to the growth and well-being of people in their communities and milieu."
"DOCAM’s main objective is to develop new methodologies and tools to address the issues of preserving and documenting digital, technological and electronic works of art. Over the project’s five-year mandate, numerous case studies will be conducted that will focus on documentary collections and conserving works of art featuring technological content.
Museums of modern and contemporary art today are facing a number of new challenges that have surfaced with the recent surge in works of art that feature technological components. These works deteriorate as their original elements break down, and the context of technological development too often eludes specialists and historians. Created during diverse eras, the works may be analog, digital, mechanical or electronic; they are also often multimedia and comprised of materials that range from machines, software, electronic systems and analog or digital images to traditional (sculpted and pictorial elements) and non-traditional (industrial material and techniques) mixed media. In fact, cultural institutions are grappling with two types of problems. On the one hand, they must create effective strategies to preserve past works of art featuring technological components. On the other hand, they must record, preserve and understand the technologies on which these works were based, with professional rigour as well as an in-depth comprehension of the historical context within which the technologies in question were developed. And these problems are not limited to contemporary art and museums. They are also found in culture industries, public heritage institutions, and institutions of higher education that have amassed collections of teaching or research material over recent decades.
This situation becomes that much more perplexing when one realizes that curators, art historians and conservators have not been adequately trained to deal with the new problems surrounding the documentation and preservation of works featuring technological, electronic or digital components. Their education in this regard is insufficient, because only a handful of art history and conservation programs in Canada focus on this reality. Numerous research projects are being conducted in the archival management domain on the preservation of electronic documents, but very few such projects exist in the specific field of art history and conservation. Standards and indeed a descriptive vocabulary for such artistic works are lacking and do not allow for precise and adequate documentation of these works. Historical documentation is generally rare and poorly preserved, and the Centre for Research and Documentation (CR+D) at the Daniel Langlois Foundation for Art, Science and Technology is one of the few places in the world to document the field of electronic and digital art."
"From March to December 2003, the archive team of V2_Organisation (a center for culture and technology in Rotterdam, the Netherlands) conducted research on the documentation aspects of preserving electronic art activities -- Capturing Unstable Media -- an approach between archiving and preservation.
For this purpose, two major case studies were investigated in depth -- both are projects that were co-developed at V2_Lab, the interdisciplinary workspace of V2_Organisation. The case studies were
- whisper by Thecla Schiphorst and Susan Kozel, a project related to wearable technologies, performance and the body;
- DataCloud 2.0 and the previous and later DataClouds, projects that involve web-based 2D or 3D visualizations of complex information structures.
Based on the findings from these case studies, a series of recommendations were formulated in the following areas:
- documentation strategies for electronic art activities;
- formal modeling and metadata;
- archival interoperability.
Furthermore, a number of technical realizations were implemented, including a public, web-based portal for V2_Archive and a new technical framework for the archive, based on XML and RDF technologies."
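As an illustration of the XML/RDF direction such a framework might take, here is a minimal sketch that serializes an RDF/XML description of an archived work using Dublin Core properties (the URI and the choice of vocabulary are assumptions for illustration, not V2_Archive's actual schema):

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

def describe(uri: str, title: str, creator: str) -> str:
    """Serialize a minimal RDF/XML description of an archived work."""
    root = ET.Element(f"{{{RDF}}}RDF")
    desc = ET.SubElement(root, f"{{{RDF}}}Description",
                         {f"{{{RDF}}}about": uri})
    ET.SubElement(desc, f"{{{DC}}}title").text = title
    ET.SubElement(desc, f"{{{DC}}}creator").text = creator
    return ET.tostring(root, encoding="unicode")

# Hypothetical URI; the case-study project itself is real.
xml = describe("http://example.org/v2/works/whisper",
               "whisper", "Thecla Schiphorst and Susan Kozel")
print(xml)
```

Machine-readable metadata like this is what makes archival interoperability possible: any RDF-aware tool can consume the description without knowing the archive's internal database layout.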
"Intentions and Objectives
ActiveArchives is concerned with electronic, time-and-space-related artworks:
- conducting research on those which have been forgotten,
- restoring those which are acutely endangered,
- conserving those which are no longer topical but still functional,
- registering and documenting those which have just been created.
High Technological Standards but Accelerated Ageing
Electronic artworks of analogue or digital nature have broadened the understanding and self-image of artistic production, and have thus changed them. The traditional artistic skills have been expanded by media and communication strategies which are based on technology. Contemporary artworks are based increasingly on audiovisual, electronic, and information technologies, and so make use of the most demanding technical standards of their time. Thus, the electronic artwork is also subject to the conditions and contradictions of its time: rapid change and renewal are characteristic of its production, whereas its recognition, recollection, consolidation, and traditionalisation are forms of deceleration, which seem merely to impede the dynamic quality of advanced technology.
Increase of Value and Restoration Reality
Within the past ten years electronic artworks have experienced a substantial increase in value and popularity. Museums and archives are confronted with new problems: though they have long used information technology as a tool, they are perplexed when they encounter collection objects which are themselves in a technological shell. Many videos have been lost forever, while others are in miserable condition. Curators are often confronted with the unpleasant alternative of deactivating installations or of modifying them in such a way that their authentic character is falsified: such inadequate "modernisation" can result from ignorance of the historical dependence of the artwork, but also from the lack of appropriate conservation strategies. Five to ten years after a work has been created it becomes very difficult to gain access to certain reproduction units: laser disc players, monitors (even of a common type), CRT projectors, and by now even LCD projectors. This situation also results from the fact that these works are increasingly sold not as a conglomerate of hardware and software, but only with a limited performance licence, with limits imposed on place and duration of exhibition.
The Electronic Artwork: Material Complexity and a Challenge for Curators, Restorers, and Art Researchers
ActiveArchives understands the electronic artwork as a unified whole, whose individual electrotechnical elements, audiovisual components, and components made of other materials must remain united. In addition to audiovisual image production and re-presentation, the material complexity of these works extends from the application of all manner of plastic, wood, and metal to the use of various electrotechnical instruments and electronic elements, to the application of photographic and painting procedures, and even to architectural structures and lighting technology. All of these constituents must be considered with regard to restoration measures, conservation concepts, art-historical models of interpretation, and scientific description. Our goal is to make authentic re-performance possible, which, on reflection, gives this term key significance. This new approach is quite different from, e.g., the mere transfer of information to another medium. Of course, digitising collections is also a crucial subject and area of research for ActiveArchives. Each transfer slightly changes the structure of the image and therefore the original substance, which demands the utmost caution in handling the transfer.
Goals and Procedures: Long-Term Perspectives and Institutional Cooperation
The concept of ActiveArchives comprises not only restoration measures and their development in the field of electronic art, but also scientific registration and interpretation of the artwork. ActiveArchives conducts research on artworks, secures them, and makes them accessible in an appropriate form: integral, partial, or as documentation. The forms of this accessibility will, in some cases, have to be developed, as will the inventory parameters. The resulting secondary information will also be stored and made accessible. ActiveArchives combines technological, art historical, and restoration information, and also conducts its own research in these fields. The findings are passed on to involved and interested institutions, such as museums, collections, research institutes, and to artists. They are also dispersed in education and publication to (future) restorers, artists, and art researchers, and to internet-based projects.
As works, information, and rights dispersed here and there are already available, it is indispensable that the project will be developed in close cooperation with the appropriate institutions, or that it even emerge from them. The project will be meaningful only if it is organised and guaranteed on a long-term basis. For this reason ActiveArchives seeks cooperation with those institutions which are most likely to provide continuity. With the works, there is no such choice: the owners, and possibly the producers and authors of the works, will be our partners. This form of partnership will be limited to the work which arises in connection with pertinent artworks in collections.
Detailed information and definitions of terms, which are inferred from the direct experience of ActiveArchives with relevant media, institutions, and disciplines, and which characterise the self-evident nature of ActiveArchives, can be found in the "Glossary"."
Thursday, January 24, 2008
And the "Research Report on JPEG 2000 for Video Archiving":
"This is an open-source wiki book: You are encouraged to add to it, to update it, and to correct inaccurate facts... It is based on the manuscript of New Media Art, a book written by Mark Tribe and Reena Jana and published by Taschen in 2006. The Taschen book is available in French, German, Italian and Spanish in addition to English. This wiki book is not intended as a substitute or replacement for the Taschen book, but rather as an expandable educational resource to which artists, curators, students and others may contribute."
The second link is for preservation guidelines.
Excerpt from their introduction:
"This paper concentrates first and foremost on the preservation of video tapes. Other forms of immaterial variable media artworks - or unstable media - may of course also need preservation. The paper concentrates on artworks on video tape firstly because THE DANISH VIDEO ART DATA BANK has its point of origin in a video workshop, and secondly because of the assessment that it is now urgent to do something about the old video tapes, in order to save the artworks created with and on "traditional", "old-fashioned" video tape - especially in light of experiences with the older U-matic tapes in the archives of THE DANISH VIDEO ART DATA BANK."
Their definition of terms:
... covers all the activities and functions which help to make a suitable and safe environment for the video tapes – the comprehensive programs and activities to safeguard the tapes, both old and new and both now and for the future. This also includes conservation and restoration.
... should stabilise and prevent further deterioration and damage to the video tapes and may include cleaning tapes and maintaining equipment and providing proper storage and handling procedures.
... covers both the restoration of the physical media and of the recorded information and may include actions to repair damaged tapes and equipment and to stop further deterioration in order to be able to access and playback the recorded information."
"The artist’s intent versus the Archives’ intent
We have already quoted some of the critical remarks from RHIZOME DIGEST: April 13, 2001 on the idea of the questionnaire, and the comments in RHIZOME DIGEST: April 20, 2001 from Jon Ippolito, Guggenheim.
In any case, you might say that just because a museum or archive has acquired a video artwork, or the artist has deposited it with the museum/archive, it does not follow that the museum/archive can do whatever it likes with it. There has to be a mutual agreement between the artist and the museum/archive clearly stating the intent of the artist concerning how and if the museum.
Maybe someone can tell whether the questionnaire below is complete and still the current version?
"Video Art / Media Art Preservation: Studies and Suggestions
THE DANISH VIDEO ART DATA BANK
Once again: the Ethical Considerations
We have already referred to the pilot project by Montevideo. They asked the moral and copyright question: can you take the liberty of "changing" an artwork produced in analogue form by transferring it to a more or less compressed digital carrier? By definition, digitisation of an analogue video work means changing both the carrier and the playback equipment of the video work.
The Guggenheim Variable Media Questionnaire
Through the "Variable Media Initiative", which we have already described in some detail, The Guggenheim Museum in New York tries to solve the problem with a questionnaire for the artists. The questionnaire – still only in a beta version (1) – divides variable media artworks into
1. artworks that can be installed
2. artworks that can be performed
3. artworks that can be interactive
4. artworks that can be reproduced
5. artworks that can be duplicated
6. artworks that can be encoded – and
7. artworks that can be networked
and asks the artist to answer different questions in a multiple-choice questionnaire.
If we take no. 4, which refers to video artworks, as an example, the following issues are put forward; the artist has to answer them both according to "current state" and how they "can vary", and also has the possibility to comment on the issues/answers:
a) original audio format
b) original photograph format
c) original film format
d) original video format
e) original print format
f) location of master
g) status of master
h) acceptable submasters
i) fate of submasters
j) permission to create submasters
k) permission to compress/digitise
To d) you have the choice between: U-matic/Betacam SP/VHS (PAL, NTSC or SECAM) or others, which you have to specify.
To f) you have the choice between: archive with work’s owner, archived in another location (specify), used for exhibition (explain) or not applicable.
To g) you have the choice between: still viable, remastered (specify format of new master), not applicable.
To h) you have the choice between: for exhibition, for research, for archive, for public distribution, not applicable.
To i) you have the choice between: require the borrower to destroy, require the borrower to return, distribute freely, other (explain), not applicable.
To j) you have the choice between: not required, required from the artist or estate, not given, not applicable.
To k) you have the choice between: for low-resolution distribution, for high-resolution distribution, a combination of above (explain), not applicable.
For each of the above 7 variable media types you have to go into detail about "In later re-creations, this artwork could be …"
We have already treated the meaning of these words, but we repeat the explanations here.
If we again take no. 4 as applicable to video artworks, you have to answer the following two options/questions concerning a):
Source: Should an obsolete source master, such as a video tape …, be restored as necessary to make exhibition copies?
Access to previous versions: Should previous versions of the work be stored so that the general public can view them?
To b) you have to answer the questions:
Source: Should the experimental effect of an obsolete source master, such as a video tape …, be reproduced in an entirely new medium (e.g. digitising an analogue tape)?
Access to previous versions: Should the remastered source be “marked” with the effect of previous display technologies?
To c) you have to answer the questions:
Source: Should an obsolete source master, such as a video tape … be migrated to a new medium that has become industrial standard?
Access to previous versions: (There are no migration options for this problem).
To d) you have to answer the questions:
Source: Should an obsolete source master, such as a video tape …, be re-recorded according to the artist’s instructions?
Access to previous versions: (There are no reinterpretation options for this problem.)
… and to each of the options/questions to a, b, c and d you have the choice between: preferred, acceptable, discouraged, inaccessible and not applicable.
(1) You may find the beta version of the Guggenheim Variable Media Questionnaire at http://three.org/ippolito"
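The questionnaire's structure lends itself to a small data model. Here is a hypothetical sketch (the option names and field layout are assumptions for illustration, not the Guggenheim's actual data format):

```python
from dataclasses import dataclass

# The rating scale described above, applied to every option/question.
RATINGS = ["preferred", "acceptable", "discouraged", "inaccessible",
           "not applicable"]

@dataclass
class RecreationAnswer:
    """The artist's answer to one re-creation option for a work
    (e.g. migration or reinterpretation, as in the text above)."""
    option: str
    source: str              # rating for the 'Source' question
    previous_versions: str   # rating for 'Access to previous versions'

    def __post_init__(self):
        # Only answers on the questionnaire's own scale are valid.
        assert self.source in RATINGS
        assert self.previous_versions in RATINGS

# Example: an artist prefers migrating an obsolete video master and
# marks the previous-versions question as not applicable.
answer = RecreationAnswer(option="migration",
                          source="preferred",
                          previous_versions="not applicable")
print(answer.option, answer.source)
```

Encoding the answers this way would let an archive query its whole collection for, say, every work whose artist discouraged migration.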
Wednesday, January 23, 2008
Panel 4: Techno-Historical Collusions: The Making Of A Trojan Horse @ Auditorium
Moderator: Florian Cramer [nl]
Participants: Eva Horn [de], Trevor Paglen [us], Pierre Lagrange [fr], Konrad Becker [at]
At first glance there may seem to be no link between the mystical worlds of witches and medieval Kabbalah and the high-tech realities of the space program and ‘Data Trojans’. But all of them mingle the unexplainable with politics and technology, and apply the fictionalisation of information to achieve narrative goals beyond the confines of their central practice. Space research conspires with military interests, which in turn unleash a multitude of events in the form of media, software and information Trojan Horses. Iconographies, a culture of belief in speculation, and an opacity of reason result where the unseen mechanisms behind an event take a dominant position over reality. Can we therefore challenge the inexplicable and rationalise the theory out of the perception of conspiracy?
Tuesday, January 22, 2008
"Modern scholars often note that early engineers did not supply formal working drawings of their devices, but represented them in real time, functioning, in a way that did not give away their secrets but could appeal to patrons. Fontana, however, makes a superb exception to this rule. It’s true that he didn’t show how to make my personal favorite among his military devices, the Monty-Pythonish fire-farting rabbit."
Excerpt from the text by Anthony Grafton.
Monday, January 21, 2008
(also see post on Aquaint)
that's what they say about their goals:
"We will work with content providers to make their materials more accessible, and to study patterns of use of their video by appropriate communities of users. Unlocking the information embedded in video for easy access by the student, teacher, journalist, scientist or home user has the potential to create video resources with significant educational and commercial value. Our approach of automatically processing large libraries will stress current limits on network and I/O bandwidth, disk space and processor speed. Our focus on video as a searchable resource has broad implications for information gathering and dissemination."
timeline of video formats
also, don't miss this:
in the "migrate digital" section, bottom of the page:
"The Material eXchange Format (MXF) is an open file format intended for the interchange of audio-visual material with its associated data and metadata.
It was designed to improve file-based interoperability between servers, workstations and other content-creation devices. These improvements should result in improved workflows and in more efficient working practices than is possible with today's mixed and proprietary file formats. MXF was designed by the leading players in the broadcast industry – with an enormous amount of input from the user community – to ensure that the format really meets their demands. It is being put forward as an Open Standard, which means it is a file transfer format that is openly available to all interested parties.
It is not compression-scheme-specific and it simplifies the integration of systems using MPEG and DV as well as future, as yet unspecified, compression strategies such as JPEG2000. This means that the transportation of these different files will be independent of content, and will not dictate the use of specific equipment. Any required processing can simply be achieved by automatically invoking the appropriate hardware or software codec."
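Under the hood, MXF files are built from KLV (key-length-value) triplets with BER-encoded lengths, per SMPTE 336M; this is what makes the wrapper codec-agnostic, since the value can carry any essence. A toy parser illustrates the idea (synthetic 16-byte keys for the example, not real SMPTE universal labels):

```python
def parse_klv(data: bytes):
    """Parse a stream of KLV triplets: a 16-byte key, a BER-encoded
    length, then that many bytes of value. Returns (key, value) pairs.
    """
    items = []
    i = 0
    while i < len(data):
        key = data[i:i + 16]
        i += 16
        first = data[i]
        i += 1
        if first < 0x80:
            # Short-form BER length: the byte itself is the length.
            length = first
        else:
            # Long form: low 7 bits give the count of length bytes.
            n = first & 0x7F
            length = int.from_bytes(data[i:i + n], "big")
            i += n
        items.append((key, data[i:i + length]))
        i += length
    return items

# One synthetic triplet: dummy key, short-form length 3, value b"abc".
packet = bytes(range(16)) + bytes([3]) + b"abc"
print(parse_klv(packet))
```

Because every element is self-describing in this way, a reader can skip any triplet whose key it does not recognise, which is how MXF stays open to future, as yet unspecified, compression schemes.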
"The video experience, as uniquely gained through The Video Guide, is not merely designed to inform, but also to entertain and perhaps even enlighten as well. It is hoped The Video Guide will serve as a useful and faithful companion during the evolution of your own Video Adventure."
Charles Bensinger, Author
The complete publication can be downloaded at the videopreservation website from Stanford University: http://videopreservation.stanford.edu/vid_guide/index.html
"The base technology developed under Informedia-I combines speech, image and natural language understanding to automatically transcribe, segment and index linear video for intelligent search and image retrieval. Informedia-II seeks to improve the dynamic extraction, summarization, visualization, and presentation of distributed video, automatically producing “collages” and “auto-documentaries” that summarize documents from text, images, audio and video into one single abstraction."
Informedia Aquaint II
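The segmentation step in that transcribe-segment-index pipeline can be illustrated with a toy shot-boundary detector (a sketch only; Informedia's actual segmentation combines speech, image and language cues and is far more sophisticated):

```python
def shot_boundaries(frames, threshold=30.0):
    """Detect cuts in a frame sequence by mean absolute pixel
    difference between consecutive frames.

    `frames` is a list of equally sized greyscale frames, each a flat
    list of pixel values 0-255. Returns the indices at which a new
    shot begins.
    """
    cuts = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i - 1], frames[i]))
        if diff / len(frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Synthetic clip: 5 dark frames, then 5 bright frames (a hard cut).
clip = [[10] * 64 for _ in range(5)] + [[200] * 64 for _ in range(5)]
print(shot_boundaries(clip))  # [5]
```

Once a video is split into shots like this, each shot can be indexed separately against the speech transcript, which is what makes the archive searchable at the segment level rather than only per tape.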