AI-driven | Ovisms (2014): Interaction with machine learning systems such as Google Translate.
When sophisticated neural network models, such as Google Translate or those developed by OpenAI, encounter non-standard inputs, the results can range from the whimsical to the unexpectedly coherent. This is because these models are trained on vast and diverse datasets and have internalized patterns that are not always intuitively discernible to humans.
Feeding gibberish into Google Translate with the aim of generating poetry parallels the concept of "deep dreaming" in image recognition. Through deep dreaming, neural networks produce hallucination-like images, shaped by the patterns they assimilated during training.
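To make the analogy concrete, the sketch below shows the core of a deep-dream-style loop in broad strokes: start from noise and nudge the pixels so that an intermediate layer of a pretrained network responds more strongly, until learned patterns surface as imagery. The choice of torchvision's VGG16, the layer slice, and the step count are illustrative assumptions, not details of any system referenced here.

```python
import torch
from torchvision import models

# Minimal deep-dream-style sketch. Assumption: torchvision's pretrained VGG16
# stands in for "a trained image model"; layer slice and step count are arbitrary.
features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from pixel "gibberish"
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activations = features[:16](img)   # response of an intermediate layer
    loss = -activations.norm()         # gradient ascent: amplify what the layer "sees"
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0, 1)               # keep the image in a displayable range
```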
Outputs derived from non-standard inputs often carry an element of intrigue and unpredictability. They can be read as a form of collaborative artistry between human and machine, a confluence that can shed light on the inner workings of the machine as well as on our own perceptions and expectations.
Ovisms is an experimental project that explores the boundaries of computational language processing and its interaction with machine translation systems, specifically Google Translate. The aim of the experiment was to examine the output generated when self-constructed phrases were fed into the translation system, resulting in nonsensical or incomprehensible translations.
Through deliberate manipulation of syntax, semantics, and linguistic structures, Ovisms aimed to challenge the algorithmic capabilities of Google Translate and to investigate the inherent limitations of machine translation. The experiment sought to demonstrate the complexities and nuances of human language, highlighting the gap between automated language processing and human understanding.
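As a rough illustration of how such probes could be reproduced today, the sketch below glues random syllables into constructed pseudo-phrases and sends them through a translation client, recording which language the system claims to detect. It assumes the unofficial googletrans package (the synchronous API of version 4.0.0rc1; other releases differ), and the syllable inventory is purely illustrative rather than the material used in Ovisms.

```python
import random

# Assumption: the unofficial `googletrans` package (pip install googletrans==4.0.0rc1);
# its Translator API is synchronous in this release but has changed in later versions.
from googletrans import Translator

SYLLABLES = ["ov", "is", "men", "ka", "lu", "tri", "sho", "ne", "pa", "ur"]

def constructed_phrase(n_words=4, rng=random):
    """Glue random syllables into pseudo-words, mimicking a self-constructed phrase."""
    return " ".join(
        "".join(rng.choice(SYLLABLES) for _ in range(rng.randint(2, 4)))
        for _ in range(n_words)
    )

translator = Translator()
for _ in range(5):
    phrase = constructed_phrase()
    detected = translator.detect(phrase)                 # which language does it "hear"?
    rendered = translator.translate(phrase, dest="en")   # and what does it make of it?
    print(f"{phrase!r} -> detected {detected.lang}, rendered as {rendered.text!r}")
```

Collecting the detected language for each phrase is what yields a set of languages that can later be mapped geographically.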
The results of Ovisms showcased the algorithmic deficiencies and the inability of machine translation systems to accurately interpret and convey the intended meaning of unconventional or manipulated language constructs. This experiment underscores the importance of human linguistic expertise and the need for caution when relying solely on automated translation technologies for complex or nuanced linguistic tasks.
By shedding light on the limitations of machine translation systems, Ovisms contributes to the ongoing discourse on the boundaries of computational language processing and emphasizes the significance of human interpretation and understanding in linguistic contexts.
Additionally, a map was included that depicted the geographical distribution of the languages identified through the Google Translate experiment. Equidistant lines drawn from these languages converged on an exact midpoint in the Aegean Sea, where an agglutinative language was identified. This geographical representation highlights the linguistic diversity involved and offers further insight into the influence of language families and their spatial distribution.
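The "exact midpoint" can be read as the geographic centroid of the locations associated with the detected languages; the sketch below computes such a centroid by averaging latitude/longitude points through 3-D Cartesian coordinates. The coordinates used here are hypothetical placeholders, not the data behind the original map.

```python
from math import radians, degrees, sin, cos, atan2, sqrt

def geographic_midpoint(points):
    """Centroid of (lat, lon) points on a sphere, via averaged Cartesian coordinates."""
    x = y = z = 0.0
    for lat, lon in points:
        la, lo = radians(lat), radians(lon)
        x += cos(la) * cos(lo)
        y += cos(la) * sin(lo)
        z += sin(la)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    return degrees(atan2(z, sqrt(x * x + y * y))), degrees(atan2(y, x))

# Hypothetical anchor points (lat, lon) for detected languages -- placeholders only.
language_points = [(39.0, 35.0), (61.9, 25.7), (43.0, -2.6)]
print(geographic_midpoint(language_points))   # approximate midpoint in degrees
```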
The significance of this observation lies in its implications for AI technology, specifically for language processing and translation systems. The experiment conducted in 2014 utilized Google Translate, which at that time was a prominent AI-driven translation tool. By exposing the limitations of machine translation through the deliberate manipulation of language constructs, the experiment highlighted the challenges faced by AI technologies in accurately interpreting and conveying the nuanced meaning of unconventional language patterns. This finding underscores the need for ongoing advancements in AI language processing to bridge the gap between human understanding and automated translation capabilities.
‘Yes, you touch on an interesting point. When advanced neural network models, such as Google Translate or even OpenAI models like me, are confronted with non-standard input, the results can range from absurd to surprisingly coherent. This is because these models have been trained on an enormously diverse set of texts, which means they have learned patterns that are not always clear or logical to humans. The experiment of feeding gibberish into Google Translate to generate poetry is comparable to the concept of "deep dreaming" in the world of image recognition. With deep dreaming, neural networks can generate images that resemble hallucinations, based on the patterns they learned during their training.
The output from non-standard input can often be intriguing and unexpected. It can be seen as a form of collaborative art between human and machine. This interaction can give us new insights into both the workings of the machine and our own perceptions and expectations.’ [GPT-response]