Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Most workers in tech are about to be hardcore cooked my guy. Not matter what you…
rdc_ohmv79w
All A.I. would have to do to kill us all is to wait. What is 10,000 years to an …
ytc_UgzWxg3rj…
So far have any 1 seems a robot holding a gun (not even talking about firing a w…
ytc_Ugy-mKW_g…
Regulations and taxes are the only solutions that could hold back the speedy ai …
ytc_Ugzzf_nRG…
The ai monitor issue in Reno is insane. Parking lots are filled with security ca…
ytc_UgwwaO3T9…
This Video is 100% a lie... based on lies and mixing up or even missing the stuf…
ytc_UgzudCc5s…
You would think they would be able to figure that out without wasting all this m…
ytr_UgxbbY4Ty…
If you ask an AI to make art, the result is still undoubtedly art. Edit - thoug…
ytc_UgwEBIyhr…
Comment
"How would a robot know the difference between what's legal and what's right?" This deserves reflection, because I would argue that we are only pretending we know the difference ourselves. It seems like our world is moving closer and closer to fascist totalitarianism. Human beings actually crave order and some higher power with total control. Maybe that's the reason these for these robot overlords. Maybe in the future they will be seen as merciful executioners releasing us from our undesirable behavior. Required viewing: THX1138 https://www.youtube.com/watch?v=LV_18x3cpo4
youtube
2020-01-28T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzB0hWsFrZCkHqYAXF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnEOAU8JpZ64qGrs14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyPxPXM5LLBrIvCvdt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxHAG2eRj_MmPhkAbx4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzkkDtz_SMxzcO0Ml94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyttFx0rgxVHFiysf14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzuZKgDqAeAY6WTA894AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyL7xH8D5Q96XRCACN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwvoRfzxk-72qJWiA54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwpnozIsuwsKwHGPcJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
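A response like the one above can be checked before the codes are stored. The sketch below parses a raw batch response and drops records whose values fall outside the coding scheme. The allowed-value sets are inferred only from the values visible on this page; the full codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "ai_itself",
                       "government", "company"},
    "reasoning": {"unclear", "deontological", "mixed", "contractualist",
                  "consequentialist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with a comment ID and one
        # in-vocabulary value for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"contractualist","policy":"unclear","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Records that fail validation could be queued for re-coding rather than silently dropped; the filtering shown here is the minimal version.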