Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Why are AI systems developing a self-preservation goal? How did the AI develop to value self preservation? Where are the 3 laws of robotics, are AI not being developed with the 3 laws of robotics ingrained into them? What is the point of developing an AI that does not have a hard code prohibiting all actions harming humans, why are we developing a superior machine that has to compete against humans to survive. This pretty much guarantees that AI will kill us for sure. Are AI developing this way despite all human efforts to prevent it from harming us? If so, then there is only one step we have to take, stop AI until we can control it.
youtube · AI Governance · 2025-08-26T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwyRHSOX7vvh2Baoex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzkLmgCB0DUSJn5Stp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxmG3kAqEHo0rrkgbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzek2PLzGl-nfJSPRV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyV5yehq2tBZrmzQmp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
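The raw response is a JSON array in which each entry carries a comment `id` plus the four coding dimensions shown in the table above. A minimal sketch of parsing that payload and looking up a coding by comment ID might look like the following (the `parse_codings` helper and the `REQUIRED_KEYS` validation set are illustrative assumptions, not part of the pipeline; the two sample entries are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# abbreviated here to two entries from the response above.
raw_response = '''[
  {"id":"ytc_UgwyRHSOX7vvh2Baoex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmG3kAqEHo0rrkgbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions plus the comment ID; assumed schema
# based on the response shown above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse an LLM coding response and index the entries by comment ID.

    Raises ValueError if any entry is missing an expected dimension,
    which catches malformed or truncated model output early.
    """
    entries = json.loads(text)
    by_id = {}
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing keys: {missing}")
        by_id[entry["id"]] = entry
    return by_id

codings = parse_codings(raw_response)
print(codings["ytc_UgxmG3kAqEHo0rrkgbN4AaABAg"]["emotion"])  # → outrage
```

Indexing by `id` makes the "look up by comment ID" view above a dictionary access, and failing fast on missing keys is one simple way to reject responses where the model dropped a dimension.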