Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Ever see a movie called, Colossus, The Forbin Project"!? I was predicting that something like this COULD happen back in the 70s, where a rogue AI takes control of America's nuclear arsenal, starts talking with it's Soviet counterpart, and, BOTH decide to take over! The humans attempt to throw a double cross, which was predicted, and, the AI, which catches it, has the scientists who were involved, executed, AND, blows one city nuked in retaliation! I MAY be mistaken on SOME of the details, BUT, at by the end of the movie, WE DIDN'T WIN! Just a story, sure, BUT, so we're the idea that a meteor could destroy the dominant species on a planet! If you say that this is just a theory, ask the 🦖🦕, okay!?
Source: youtube · Category: AI Harm Incident · Posted: 2025-08-29T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw-bRznbNjTj8JygiF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyWnVSuzlt_VyDSQ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyjbcok5o9jPi1TOyB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwWPTViQXrm3-MXVll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZ3lzTOcpCcj75WsJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyLhU8OtB7fWwX31vB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxiGFxewF1agWu2xwZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyITxNv9N1UC_a8NpN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxSMJUYlTkq4mCyDo14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgySnEqjE5leF50qgdt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
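A raw response like the one above must be parsed and checked before its records are stored, since an LLM can emit malformed JSON or labels outside the codebook. The following is a minimal validation sketch; the allowed label sets are inferred from the sample output shown here, not from the full codebook, so they are assumptions and may be incomplete.

```python
import json

# Label sets inferred from the sample response above (assumptions, not
# necessarily the complete codebook for each dimension).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it has an "id" and every coded dimension
    carries a value from the allowed set for that dimension.
    """
    records = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"virtue","policy":"ban","emotion":"fear"}]')
print(len(validate_coding(raw)))  # 1
```

Records with out-of-vocabulary labels are dropped rather than corrected here; in practice they would typically be queued for re-coding.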