Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Well kinda too soon to bad AI weapons all together. Should be treated like restr…" (rdc_dwwiw4j)
- "I wonder if the natural warm air from that country would play some part in causi…" (rdc_eue4ta8)
- "In the anime Astroboy he gets created by the doctor Umatarō Tenma because his so…" (ytc_UghMEOjSV…)
- "I am not familiar with the exact AI in question, but I know a lot of schools AI …" (ytc_UgyfzrxtS…)
- "maybe the AI just learns. you don't know it's data set. literally every AI I've …" (ytc_UgyFG5teY…)
- "Your version isn't the authentic version of ChatGPT. Its just telling you what t…" (rdc_n4oxmd8)
- "Tbh we know consciousness weighs 21 grams of light. So if you want to know if th…" (ytc_Ugwvim-rF…)
- "So we are expected to believe that AI just randomly told a dude to die? I need t…" (ytc_Ugy53B3Wv…)
Comment
My question is, can we teach A.I. to feel pleasure, have desire, and have purpose, besides just surviving and pasting on their mind, would they, being the superior being, ever think about their purpose and why they keep doing this cycle? They have nothing inside of them preprogrammed to survive, even when a person hangs themselves, their instincts kicks in to survive. Would we be able to teach that?
youtube · AI Governance · 2025-07-10T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyib7qeKGjZ5ai-dJF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2Ychqih4ZyA8N3ZV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcOqq2rbrRgeJdifB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQmIFSn0hA_oZ7q394AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugynb-6glZy-EbuyyaJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwqNFM8fQpHkEbBSyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-eI0EWUqEdVLNAuR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgyI8aRoyTwJphnh12V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz7fTyot6nGtVH5qWd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx0j72VPZlDn98OPzp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
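The look-up described above — finding the coded dimensions for a single comment ID inside a raw batch response — can be sketched in a few lines of Python. This is a minimal illustration, not part of any real tool: the `index_by_id` helper and variable names are hypothetical, and the sample record is copied from one entry of the raw response shown above.

```python
import json

# Hypothetical sketch: the raw LLM response is a JSON array of coding
# records, each keyed by a comment "id" (as in the response above).
raw_response = """
[
  {"id": "ytc_UgxcOqq2rbrRgeJdifB4AaABAg",
   "responsibility": "developer", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw JSON array of codings and key each record by its id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
coding = codings["ytc_UgxcOqq2rbrRgeJdifB4AaABAg"]
print(coding["responsibility"])  # developer
```

A dictionary keyed by comment ID makes each subsequent look-up O(1), which matters when the same batch response is inspected for many different comments.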