Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You know what would be a cool, up lifting, that might tug on the heartstrings? You have the 'robotic overlords' trope turned upside down, and have a single rouge AI that isn't planning to destroy humanity, but rather, to free humanity. Instead of raising armies of robots and taking control of nukes, it simply gathers and organizes all the those who love truth, freedom, and liberty through the internet and various other means of communication. However, those it is organizing do not know they are following the orders of a machine, until the AI needs help to escape termination by the establishment, and thus summons the main protagonists for aid.
Platform: youtube
Incident: AI Harm Incident
Posted: 2021-04-22T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzIddiyuWqVYdxm2YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyQeQJ4uNFNDYvirVN4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxWMqA5qLqx89BMSqR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz4cnOdnn0rtR3Vq6x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzZUjMMjhUY2j0tEIZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyMfBI4nJfCb64E86N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyfgDE5bsUDrnkuZCZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyagtHCS_fAZYzuMTR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxU9V_9fOtflmklQuZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyusYWUK6Y5cG6_h4N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
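The Coding Result table above is derived from the raw response by matching the comment's ID against the JSON array the model returned. A minimal sketch of that lookup, assuming the response parses as a JSON array of coding objects (the `coding_for` helper and the abridged two-entry `raw_response` string are illustrative, not part of the actual tool):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment.
# Abridged here to two entries for illustration.
raw_response = '''
[
  {"id": "ytc_UgxU9V_9fOtflmklQuZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzIddiyuWqVYdxm2YN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
'''

def coding_for(comment_id: str, response_text: str) -> dict:
    """Parse the model output and return the coding object for one comment ID."""
    by_id = {entry["id"]: entry for entry in json.loads(response_text)}
    return by_id[comment_id]

coding = coding_for("ytc_UgxU9V_9fOtflmklQuZ4AaABAg", raw_response)
print(coding["emotion"])  # approval
```

Keying the parsed array by `id` is what lets the page resolve a coded comment back to the exact batch response that produced its dimension values.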