Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or pick from the random samples below.
- `ytc_Ugw1urvKy…`: Not to worry about AI. We should worry about the *_12,500_* nuclear warheads hum…
- `ytc_UgwezBEJ2…`: Chatgpt and other llms don't reason. They extrapolate, they use stochastic metho…
- `ytr_Ugx3dmIRO…`: I understand your concern! It's a common fear about AI and technology. In the vi…
- `ytc_UgzXIWB_S…`: What do you think, can AI replace programmers? / BTW, I turned my personal Notion…
- `ytc_UgzgyXW6n…`: Literally me: / Me:" do you have shit life?" / AI:" yes" / Me:" *YOU ARE PROMOTED I…
- `ytc_UgyjQg7zt…`: Sam Altman is lying to the public. He's undermining AI safety and thats evident …
- `ytc_UgxhdNPT7…`: In just 25 years IT's become a subscription-blackhole profit-center Death Star. …
- `ytr_UgwIp9vsC…`: I think ai programming itself is a thing right? I bet the second that happens sc…
Comment
> You want to build station in Mars well in the future D's robots will do that for us and we'll build stations on Mars as well as other things that will benefit a colony that's my idea instead of sending you and swear they can't go outside but a robot could in certain environments maybe even some robots that will build a city with the resources of the environment and the robots with know how to implement the actions needed
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2020-02-19T22:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
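Each coded comment thus reduces to the four dimensions above plus a coding timestamp. As a minimal sketch of the record shape, assuming the category values visible on this page are representative rather than exhaustive (the `CodedComment` name is hypothetical, not the pipeline's own):

```python
from typing import TypedDict

class CodedComment(TypedDict):
    # Comment ID; "ytc_" prefixes appear on comments and "ytr_" on what
    # look like replies (the prefix meaning is inferred, not documented here).
    id: str
    responsibility: str  # observed values: none, user, developer, ai_itself, distributed
    reasoning: str       # observed values: unclear, consequentialist, deontological
    policy: str          # observed values: none, ban, unclear
    emotion: str         # observed values: approval, fear, indifference, outrage
```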
Raw LLM Response
```json
[
  {"id":"ytc_Ugyyw77kbe3DWsoJfet4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHHuX04hUAJ5Ju8554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxeXr8ZBKWRJvDJD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6zJUF6uA2vOh5qF54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQYHAqDi5lcfSD8jt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy8hc5Y6J6ss119hy54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzOhPOqjqbyShKdAjR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKGAqYvZ3lbHJDIaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyQEEeH8V8PDf2BO594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyB0hTWRzYuiJJKPCJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
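Because the raw response is a flat JSON array keyed by comment ID, the lookup view only needs to parse the array and scan for a matching `id`. A minimal sketch under that assumption (`lookup_comment` is an illustrative helper, not the tool's actual code):

```python
import json
from typing import Optional

def lookup_comment(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response (a JSON array of coded records) and
    return the record whose "id" matches comment_id, or None."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Example against the response above:
raw = '[{"id":"ytc_UgzxeXr8ZBKWRJvDJD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(lookup_comment(raw, "ytc_UgzxeXr8ZBKWRJvDJD14AaABAg"))
# Prints the matching record as a dict; an unknown ID returns None.
```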