Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or through the random samples below.
- "AI companies should have the right to buy a copy of each book and do what they w…" (ytc_Ugzoj3_A_…)
- "any ai creation should be marked as such... let people that don't mind it enjoy …" (ytc_UgxjJgW8H…)
- "ChatGPT just helped me to create an atomic bomb. The system is still very vulner…" (ytc_UgzzjoGRx…)
- "after all it said to you, i would ask him about manipulation, it's definition, a…" (ytc_UgyivoSw-…)
- "I mean I’m joking here but what happens when two AI systems turn on one another …" (ytc_UgwSyiFMm…)
- "LLMs are censored, so if they use censored LLMs to teach censored LLMs, they wil…" (ytr_Ugyhefhox…)
- "We appreciate your humor! Indeed, even the most advanced AI models, like Sophia,…" (ytr_Ugz8Wbf_d…)
- "Very good video that doesn't stop at AI is slop or bad and I like the distingshe…" (ytc_UgxVQsCg7…)
Comment
> This was clickbait imo. He asked the program to do something and then seemed surprised it was behaving that way. Just because it says something does not mean it is capable of doing that, for example I can say I am capable of getting information on any human on earth, but I am not capable of that.
>
> This is a simple command = response situation. If you ask a robot to behave a way don't be surprised if it does. It will talk the talk but not walk the walk lol
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2023-04-27T21:1… |
| Likes | ♥ 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyQuC38cQDT1kvOOuB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYcW-EJiBEmUMDvoB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyutrYA1RzeDIB05-t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8Fbbf9VBtpmDTIHh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJIn2QJb1iKuHlZJx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwjI91kx49qalMpqDN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBaSBAAZIMzOwaOHl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxX6N8c3amGi60JXdh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzI2gw7OnmrVRTFFQh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxlWWBaeoHRVJU4osh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
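Each raw response is a JSON array of per-comment codes across the four dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and validated downstream; the allowed category values below are inferred only from the samples on this page, and the tool's actual codebook may define more:

```python
import json

# Category values observed in the samples above (assumption: the
# full codebook may contain additional values per dimension).
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "ban", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any value outside the known categories."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# One record from the response above, used as a self-contained example.
raw = (
    '[{"id":"ytc_UgyutrYA1RzeDIB05-t4AaABAg","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
codes = parse_codes(raw)
print(codes["ytc_UgyutrYA1RzeDIB05-t4AaABAg"]["emotion"])  # indifference
```

This is also where a record is matched back to a single comment: the Coding Result table for the comment above corresponds to the response entry whose `id` equals that comment's ID.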