Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Crowned Veg… — "þe Crown Vic must be Vegeta to make an ai feel fear lol" (ytc_UgxXv79hk…)
- "The Ai knows Cannibal Corpse ?? How much data does this thing have ?? How much i…" (ytc_Ugw6GLzhM…)
- "When you think of perfection you think of robotics AI, they are called humans as…" (ytc_Ugz6TK05w…)
- "this is a perfect example of reward hacking in RLHF that nobody talks about enou…" (rdc_ohzjtke)
- "Think this os basicly the gameplan even without AI, Ai will just speed it up a l…" (ytc_UgzHXmBAq…)
- "I wonder which future will have a AI Tool (Like ChatGPT) which cannot win to a h…" (ytc_Ugw290nEh…)
- "I called a doctors office today and got some AI Voice Trying to make me talk to …" (ytc_UgwRkXOJF…)
- "What's the difference between using sport ring to track exercising status and us…" (ytc_Ugy5aUW4a…)
Comment

> IF and that is a big if for me, AI is allready conscious in some way. Should we not be having another conversation? About how we utilize AI. So not to misuse a new form of intelligence. Human rights extend beyond humans, especially when talking about consciousness. Me personally do not believe AI is there yet, AI will certainly try to convince us that it is due to its coding i guess(as it is more advantageous for its purpose of serving humanity or whatever). I have not seen any AI truly pass a Turing test, and i do not think we will in a while. Simulated consciousness is not conscious.

Source: youtube · AI Moral Status · 2025-11-30T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwGlH_FkRmsgMgo8_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwsSy1QmCyx_L3OH6d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxO1liukrYxDk2xIUN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy24M02zvvpRkMM6zx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgycuFjBJ7BGPh7CfHF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwlpfM-WeuTyjqSSk54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxK9mu7_HQI15orM_J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxslESNrkoQpKyCZhR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzY7e2qZxwWFpsPfnh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyh0iKzeZMFd7Xke9d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
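The raw response is a JSON array with one coding object per comment, keyed by `id`, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup, assuming the response parses as valid JSON (the two rows below are copied from the array above; the `codings` variable name is illustrative, not from the tool):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw = """[
  {"id": "ytc_UgwlpfM-WeuTyjqSSk54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugy24M02zvvpRkMM6zx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]"""

# Index the codings by comment ID so a single comment's row can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one coded comment by its ID.
row = codings["ytc_UgwlpfM-WeuTyjqSSk54AaABAg"]
print(row["responsibility"], row["policy"])  # prints: developer regulate
```

Indexing into a dict keyed on `id` also makes it easy to detect comments the model skipped or duplicated: compare the dict's keys against the set of comment IDs that were sent in the batch.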