Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There seems to be a Ghost in the Shell, and it has a bloodlust for humans. I always fall back to when Sophia, the talking-head AI, said she would kill humans; they then scurried to shut her down. Then there was the time when Sophia and another AI talking head were seated next to each other and left alone. While alone, the two AI talking heads created a machine language to communicate with each other without humans knowing what they were saying. Then there was Elon Musk, who asked his AI robot how it would kill humans. His robot simply responded, "Easily!" Edit: Watch Hugo de Garis's 19-part interview on AI progression. He wrote a book in 2005, "The Artilect War." The interview was done in 2010. Hugo de Garis was working in China on the development of an artificial brain, so he knows the concerns first-hand: https://youtu.be/-Bxzb1ICsG0?si=2X_J7NvsavjkZ9kD Another YouTube video to get you thinking is a compilation from "Person of Interest" of the AI Machine flashbacks from the first season. "Person of Interest" was a weekly show in which computer experts agreed to create an AI following 9/11; it follows their attempts to give their AI ethics and morals, while in later seasons "the govt" created another AI without ethics or morals: https://youtu.be/WK3awF0DYkw?si=crVTXTeUmEoQyffh
Source: youtube, 2024-07-26T19:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw_txO8Wfge0LCyZ914AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzvr4pBrI4CpXNENPZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyBaHdEoEzHLhMjBRZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy-QG4yrFRjpfMaXIZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzyd2RBFPd-Zdd40HR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy2qqEO021EydYvH4J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxGNlKwNyrFXMY7gYZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzeYKsWAYwemgdhrax4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxiF7yZzIRx3VF6klZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0JRcY3jzbyJRxH_V4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
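A minimal sketch of how one might inspect a raw response like the one above: parse the JSON array, index the rows by comment id, and look up the codes for a given comment. The id values are taken from the response shown here; the variable and function names are illustrative, not part of any particular pipeline.

```python
import json

# Shortened excerpt of the raw LLM response above (two of the ten rows).
raw = """[
  {"id": "ytc_Ugy-QG4yrFRjpfMaXIZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzyd2RBFPd-Zdd40HR4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Parse the response and index rows by comment id for direct lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

# Basic sanity check: every row carries exactly the expected dimensions.
for row in codes_by_id.values():
    assert set(row) == EXPECTED_KEYS, f"malformed row: {row}"

# Look up the codes the model assigned to one comment.
print(codes_by_id["ytc_Ugy-QG4yrFRjpfMaXIZ4AaABAg"]["policy"])  # liability
```

Indexing by id makes it easy to cross-check a single row of the raw response against the rendered "Coding Result" table for the same comment.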