Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugwc49nb-…`: "People on here thinking that this is real are stupid. AI manipulated. People …"
- `ytc_Ugxr0fC8C…`: "Personally, i think it needs to be reevaluated what sentience means, especially …"
- `ytc_Ugyg9I5W2…`: "We shouldn't be talking about replacing teachers with AI. We should be talking a…"
- `ytc_UgyXm1C4l…`: "I wonder if anyone who cite MIT paper about only 5% companies having ROI from AI…"
- `ytc_UgwV8Cu1L…`: "AI is the scum of the Universe right now its everywhere and its getting annoying…"
- `ytc_UgzXFrKmG…`: "If only AI had arrived sooner, Bluementhal could have used it as an excuse for w…"
- `rdc_k0al9kl`: "Well, I have asked this question in other discussions and haven't heard enough r…"
- `ytc_Ugx5VmPgP…`: "I feel sorry for future children. Because they can't become what they love, they…"
Comment

> AI won't need to do anything nefarious to take over anything. Human greed and laziness is so reliable that AI soon will know their creators are a pushover.. Humans will willingly hand over everything for an easy life or an advantage over other humans perfectly willingly.

youtube · AI Moral Status · 2025-07-01T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyg10bTuFC7osW6pwZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgymieBgfWROVpC0DGt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwzsYsTVUzz0D9sjTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzNQxJJWTYJTy8pEdB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxt5Bt6sHT68gAES0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxDTP0DDmkIv7K9MHp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxfDnPM8xmGvcg1aaR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxONx_4BOXD2bM9wfN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxioKzRzykGqjRa0Ox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXs-5V1905ubVJ_Qt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
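A response in this shape can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the tool's actual code: the allowed label sets are inferred only from the sample output above (the full codebook may define values not seen here), and the names `ALLOWED` and `parse_response` are illustrative.

```python
import json

# Allowed values per coding dimension, inferred from the sample response above;
# the real codebook may include additional labels.
ALLOWED = {
    "responsibility": {"user", "government", "ai_itself", "distributed", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "industry_self", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID."""
    records = json.loads(raw)  # raises ValueError if the model emitted invalid JSON
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a one-record response (hypothetical ID):
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"}]'
coded = parse_response(raw)
print(coded["ytc_x"]["emotion"])  # prints "fear"
```

Validating against an explicit label set is what makes the "Coded at" record trustworthy: a hallucinated or misspelled label fails loudly at parse time instead of silently entering the dataset.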