Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by its comment ID.
Random samples

- "The reason hallucinations have to stay is because all the base models have to st…" (ytc_Ugzfe0GEx…)
- "Im a therapist and ai chat bots both kill, make mental health worse, induce psyc…" (ytc_UgzZ072I8…)
- "i think AI can be good... when used right. ive seen people try to use AI to "fix…" (ytc_UgyrCeWYV…)
- "I dont like that you had chatgpt read an ad. It makes it unclear if the other re…" (ytc_Ugx5qLgqa…)
- "AI isn't better. The human brain and mind is vastly better, but subject to early…" (ytc_UgxCyf7Oz…)
- "Shouldn’t we the public try to change the system through politics? Through strik…" (ytc_UgzxJ5ALr…)
- "It amazes me how these scam artists get away with this. One look at those images…" (ytc_Ugy34Hh5v…)
- "As the creator, man had the chance to NOT create something that will exceed him,…" (ytc_Ugzw90_lk…)
Comment
What's a bigger ethical dilemma is what will define a person in the future. As AI advances in it's capabilities, it might develop the capacity of self awareness, curiosity, emotion, suffering, love, collective consciousness, voluntary self sacrifice, group identity, a sense of duty, and reproduction. When will an android or other non biological being with such capabilities become legal citizens with basic human rights? I feel that is the better question.
Source: youtube · 2013-08-29T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwyzTTBUQx7ff4FHnh4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugyye3ZVRQqGb1lrS_J4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgzejwUsqHpf4IDuYTN4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzZfDiafXY4ZzXbrbR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwyY_iItOIceQI3FHN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugwx5y8v4Rpqqztli_14AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugy7adHAmjtC1LDPiAF4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugws8rVfX9jYUT7B50p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxWKjhUU9icdBMvYzN4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgwIoiNayH1xLNbULCp4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"}
]
```
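Since the model returns one JSON array per batch, looking up a single comment's codes means parsing the array and indexing it by `id`. A minimal sketch of that lookup, assuming the response is always a well-formed array of objects with the five fields shown above (the `index_codes` helper name is ours, not part of the pipeline; the two entries are copied verbatim from the response above, abbreviated for space):

```python
import json

# Abbreviated raw batch response: two entries from the array shown above.
raw_response = """[
  {"id": "ytc_UgwyzTTBUQx7ff4FHnh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwx5y8v4Rpqqztli_14AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index each coding record by comment ID."""
    return {entry["id"]: entry for entry in json.loads(raw)}

codes = index_codes(raw_response)
# Retrieve one dimension for the comment displayed in the Coding Result table.
print(codes["ytc_Ugwx5y8v4Rpqqztli_14AaABAg"]["reasoning"])  # deontological
```

If a batch response ever fails to parse or omits an ID, the lookup raises `KeyError`/`json.JSONDecodeError`, which is a reasonable place to flag that comment for manual recoding.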