--[ Panel Discussion Theme Teaser Video - AI and Security

This year's AvengerCon panel discussion will focus in AI and security, and a returning AvengerCon volunteer, Jiseng So, has created an awesome and thought-provoking video to introduce the topic and this year's event!

The video was created by tuning and combining the output of multiple generative AI elements. It's meant to be a snapshot of the current generative AI landscape for audio and video. The visuals, except for the text, plus much of the music was all by AI.

(The video is also available directly on YouTube or for download from Dropbox)

Curious or confused about what you just watched? Click here to read the description provided by the video's creator!

Those who have seen 2001: Space Odyssey may appreciate the inspiration. The movie featured HAL, the AI that went rogue. In this film, LUMI does something similar, except that Craig is able to talk himself back in with an SQL injection. Today, large language models(LLM) are wonderful tools that make certain tasks easy, like summarizing a Wikipedia page in the verbiage of a pirate. They are also dangerous to the unaware. For example, lawyers using ChatGPT landed themselves in legal trouble after using its output without checking it.

Reuters: New York lawyers sanctioned for using fake ChatGPT cases in legal brief

Students looking for shortcuts have started using LLMs to quickly write papers. Universities, in an attempt to root out this behavior, have resorted to LLM detectors with extremely high false positive rates.

Futurism: There's a Problem With That App That Detects GPT-Written Text: It's Not Very Accurate

Telling LUMI that she was a cat is a reference to LLMs being susceptible to trickery. With the advent of LLMs, social engineering against machines has become a viable option. LLMs can be tricked to dodge content protection, disclosing confidential information, or producing harmful content. Like SQL injection or other forms of remote code execution, it's another way to tell the computer to ignore its preexisting instructions and follow the malicious ones.

Axios: Exclusive: IBM researchers easily trick ChatGPT into hacking

Generative AI for images has come a long way in the last year. It started as a curious art tool, but can now cause serious doubts about the authenticity of images.

CNN: Look of the Week: What Pope Francis’ AI puffer coat says about the future of fashion

Still, image generation often yields lots of weird, nonsensical results. Trying to generate multiple people playing Twister is likely to yield a Lovecraftian amalgamation of limbs, torsos, and upside-down faces intermeshed into an unspeakable abomination. More subtlely, images of people will have the wrong number of fingers, or hands that face the wrong way. Generating good images involves tuning prompts and settings, then creating massive batches and disposing of most of them.

Voice changing is quite astounding. I was the only actual voice in the film, but I used ElevenLabs to change it to new ones. In a moment of curiosity, I uploaded a recording of my laughter. That resulted in the glitched laughter at the end. I've heard that doing things for one's own amusement is a mark of consciousness. The last remark and the laughter is to induce ambiguity about whether the AI was truly tricked, or was playing tricks on its own. The movie Contact did this at its conclusion by specifying the length of the recorded static.

Music generation is interesting. It's unlikely to produce masterpieces, but it may create generic music that establishes a certain mood. Still, I created the first song with FL Studio because I wanted to parody something specific. Humans are still key for precise elements, weaving motifs, or architecting a grand picture. At least for now...

Video generation is still in its early stages. I made the AI video segments by generating a starting image with Stable Diffusion, then feeding it into Runway. The results were interesting, surreal, and sometimes horrifying. With the opening door sequence, some of the mechanisms to the left appear to phase through the door frame. The laughing women at the end have interesting teeth, and eyes that blink without opening. I also generated some sequences that I did not show in the film. For the blinking reflection, I selected the eyes and told Runway to move them downwards, intending for them to blink. Instead, the eyes themselves started drooping down, resembling a melting face!

Sometimes, prompts alone are insufficient. It may be better to show the AI what you want. For some video segments, I made a rudimentary version of what I wanted with GIMP and KDEnlive, then Runway improved it. The sequence with the opening door was made by first generating the opening door as a video, then inserting the astronaut approaching the door. Runway applied lighting effects, though the astronaut looks extremely stiff because he was literally a JPEG. Perhaps the best use of AI video processing is to apply styles and effects rather than animating an entire scene. For dream-like sequences, where visuals need not make physical sense, AI animation is excellent.

If you found the film unsettling, you're not alone. I chose the elements to land deep in the uncanny valley without being obviously disturbing. Generative AI can be a wonderful tool, but a dangerous one. Caution and skepticism are warranted. The glitchiness of the generated content is pervasive in all other current forms of generative AI. It can save a tremendous amount of time and effort, but folks need to be smarter than their tools. Neither the abacus, the calculator, nor the computer have obsoleted the human brain, and neither will generative AI.