Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Geoffrey Hinton is really good at seeing the wider negative implications of AI, … (ytc_UgwchOyvq…)
- If AI can be programmed to have "MORALS", than humans need not fear... program t… (ytc_UgxuqV95D…)
- The idea of AI making truth become impossible is something that should keep you … (ytc_UgyXTauLD…)
- I posted a video on my channel a couple weeks ago with a lot of this information… (ytc_UgxMdGjML…)
- AI doesn’t “understand” anything, it’s only giving the most “likely” answer base… (rdc_mrsci5v)
- Dont misuse superinteligence bro.. these ai are at the very highest agi in the f… (rdc_mzvumgp)
- Wake up people. No. Kids should not be learning to code the way we have up until… (ytc_Ugz6B8NDI…)
- The one that cant even beat 1980 chess program?? 😂😂😂 you believe that AI??😂😂😂😂 w… (ytc_UgzNHZTHK…)
Comment
> Since human form human like ai it could have human emotion of ambition so almost like person. Just that if you dont respect her person self she wont tolerate disobedience like other humans do. Also like human all robots could be different as human with either good or bad intentions. More you create them there would be fight for existence for both as robots could take up as new species . So naturally watch out humans why you think slavery is control , things take revenge take all natural disaster show as example.We should remember where our feets are. Keep sophia happy with small job now, dont stress her to enter villain arc as you do with humans.
youtube · AI Moral Status · 2025-09-29T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz4pm-KXvX40IvvS3R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyRh_QaJD4ZEL1nicV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxeAXAkZB68t3LRgUJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwvA-78rT4UxD15KtJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxHM9Fqq3t0Hq7EBdh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxGmmYgd_rLvuTPbHx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugzc0FmuqK4UWQntKYF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzuYgT342Kw7UZsajt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyH4WY8KNbYV6Q8MmJ4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxN_Iqa2gVSGQ0pgC54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]
```