Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It's better to use technology for the better of Humanity rather than Ai overthro…" (ytr_UgzsM07Br…)
- "unless A.I. is being grounded in pure logic and facts, no A.I. will never scare …" (ytc_UgwPzhqiT…)
- "As someone who has never used ai chat and who's only idea for using it would pro…" (ytc_UgxW1DA4v…)
- "ARTIFICIAL “intelligence” by definition is not actually sentiently conscious at …" (ytc_Ugz9sDXtm…)
- "Apparently AI ChatGPT does get tricked everytime because of its “AI Intellegence…" (ytc_UgyGYjLxh…)
- "Llm are not ai. The market will implode when the general public do not buy the l…" (ytc_Ugyo5njfP…)
- "1:00:46 regarding simulation theory, I posit that the AI running this simulation…" (ytc_Ugw1cz5Bb…)
- "I for one welcome our AI Overlords, hopefully it hits ASI soon and will tell us …" (ytc_UgwrHpSyd…)
Comment
> make a very intelligent robot that mimics human behavior and can understand the principles of morality, so that if I teach it to do good and protect humans, it will virtually never turn on them since their A.I is not based off of logic. this is how to safely make advanced robotics without fear of destruction.

Source: youtube | AI Moral Status | 2017-02-18T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
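The coding result assigns one categorical value per dimension. The full codebook is not listed on this page, so as a minimal sketch, the allowed values below are assumptions inferred from the sample response further down; the real schema may define additional categories.

```python
# Validate one coded record against the (assumed) coding schema.
# ALLOWED is inferred from values observed in the sample LLM response;
# the actual codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the table above:
coded = {"responsibility": "developer", "reasoning": "consequentialist",
         "policy": "regulate", "emotion": "approval"}
print(validate(coded))  # → []
```

Running this check over a whole batch before storing it catches off-schema values the model occasionally emits.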
Raw LLM Response
```json
[
{"id":"ytc_Ugi1tbtlxKr-8XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ughci8PvYjgzbHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Uggf8_KMHsUjxngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugg4WLKkaPThkngCoAEC","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UghYmUh33CGC_HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugjk7YNbIdm9X3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UghxAGMIaOpC6XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Uggt5EwAZoQM2HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgiMP4Ph6fznpXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgimI6XNv8cF_XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
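The raw LLM response is a JSON array with one coding object per comment, keyed by `id`. The "look up by comment ID" step can be sketched as follows, assuming only that array shape (the two records are taken from the sample response above):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects,
# as returned for one batch of comments.
raw_response = """
[
 {"id": "ytc_UghYmUh33CGC_HgCoAEC", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
 {"id": "ytc_UghxAGMIaOpC6XgCoAEC", "responsibility": "developer",
  "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = by_id["ytc_UghYmUh33CGC_HgCoAEC"]
print(record["policy"])  # → regulate
```

Building the dictionary once makes each subsequent ID lookup O(1), which matters when spot-checking many comments against their raw model output.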