Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"...a malevolent superintelligent AI somehow emerges and uses its evil genius to destroy humanity. We've all seen that movie." Yeah, it's almost as if the entire fucking premise of that movie was based on a real concern held by real scientists and engineers working on AI who sat down, thought logically about its implications and realised that yes, this is a real thing that could really happen. Seriously, the fact that "evil genuis" AIs are common in science fiction should be taken as evidence FOR them being a genuine threat, not against it. Obviously things like the Terminator films are not realistic depictions of AI takeover, but the entire concept of a misaligned AI that poses an existential threat to humanity would've never filtered down to the people who made the films in the first place if the people who founded the field of AI didn't raise it as a genuine possibility. Dismissing the threat of misaligned AI as just some silly fictional thing we don't need to worry about because it's in a lot of corny sci-fi movies would be like dismissing the possibility of nuclear holocaust because of the number of post-apocalyptic films with silly things like radiation-induced mutants and giant animals. Like arguing that because Fallout isn't realistic, we shouldn't even consider the possibility that a full-scale nuclear war could end civilisation or do catastrophic damage to the planet.
Source: youtube · AI Governance · 2023-10-22T05:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
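Each coding result is a fixed record across four dimensions, so it maps naturally onto a small typed structure. A minimal sketch in Python; the class and field names are hypothetical, and the example labels are taken from the responses shown on this page:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One comment's coding across the four analysis dimensions."""
    responsibility: str  # e.g. "developer", "company", "government", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "none"
    emotion: str         # e.g. "fear", "outrage", "resignation", "approval", "indifference", "mixed"

# The result shown in the table above:
result = CodingResult(
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
)
```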
Raw LLM Response
[ {"id":"ytc_UgwVDGcwfE1NYW9XH-Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwh8RCr3PCRf9gZv2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwF-19EE07RA1K7ctJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzWDaIPsZWzKTE5m2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzsbOyMmAMrNEnLT-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz7DCi196hY0sr-9sJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwTWARgml9yte_J2n54AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxXOLGG4GygC5P_cG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwNo-P3LDetGzQSm2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz0p9CUmHlcvEPWeyB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]