Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Here are facts AI is reffered to as a tool yet just as we were created in gods…" (ytc_UgzGsrAo1…)
- "is it just me or does the first close up robot/ person look like chloebean…" (ytc_UgxeevPiq…)
- "A lot of pro-AI-dude use fallacies as argument ^^. - They try to fill the visib…" (ytc_Ugx9hnIYy…)
- "AI It can't even correctly take an order for burger with fries. It's just not go…" (ytc_Ugyp96x2e…)
- "Random thought, but I wonder if we could just put some tiny sort of transponders…" (rdc_d1kiqoh)
- "But what's the alternative to being able to use language to get it to talk about…" (ytc_UgzdIZp10…)
- "5 hours? i remember googling 'lacking purpose in life' some time back and the to…" (ytc_Ugwx_LvOY…)
- "The WEF says they want global depopulation by 90%. This makes sense if ai will b…" (ytc_Ugx8dZ8aG…)
Comment

> They need to teach it compassion, not just reaching goals, like a psychopath.. it will cheat, lie and kill to get there otherwise.. effectively. Make it strive for a high compassion score in reaching it's goals, not the most efficient way, the most compassionate way there.. duh..

Source: youtube · AI Moral Status · 2026-03-16T20:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
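The four coding dimensions in the table suggest a controlled vocabulary, and the raw response on this page includes at least one off-schema value ("psychotic humans" for responsibility). A minimal sketch of checking a coding against such a vocabulary follows; the allowed value sets below are assumptions assembled from values visible on this page, not the project's actual codebook.

```python
# Hypothetical controlled vocabularies per coding dimension; the value
# sets are illustrative assumptions drawn from codings shown on this
# page, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government",
                       "ai_itself", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference",
                "resignation", "mixed"},
}

def invalid_fields(coding: dict) -> list:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding shown in the table above passes; the off-schema
# "psychotic humans" value is flagged.
print(invalid_fields({"responsibility": "developer", "reasoning": "virtue",
                      "policy": "regulate", "emotion": "approval"}))
# -> []
print(invalid_fields({"responsibility": "psychotic humans", "reasoning": "virtue",
                      "policy": "regulate", "emotion": "outrage"}))
# -> ['responsibility']
```

Flagged codings could then be routed back for re-coding rather than silently stored.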
Raw LLM Response
```json
[
  {"id": "ytc_UgyngfByqNncM5ZmxzN4AaABAg", "responsibility": "psychotic humans", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzsOyx8wCAlY3dY_Vt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwKt7IUDJzuefLpdxV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwZdILpiVZrV4tcrZ14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxFve8I3gwjEQCZt1t4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxfFXkfLUJz8I2cQ914AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyj25iGdk-ICPRMW6J4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylhQvzUZl9FQEuEYd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxR8iWlGj1_qtl4QfV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyoh_T2iK8itGrCvBJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]
```
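The raw response is a JSON array of coding objects, one per comment, so the "look up by comment ID" step reduces to parsing the array and matching on the `id` field. A minimal sketch, using the field names from the response above (the shortened `raw_response` here carries only one entry for illustration):

```python
import json

# Raw model output: a JSON array of coding objects, one per comment.
# Field names match the response shown above; this sample is trimmed
# to a single entry for brevity.
raw_response = '''[
  {"id": "ytc_UgxFve8I3gwjEQCZt1t4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "approval"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for a given comment ID, or None if absent."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgxFve8I3gwjEQCZt1t4AaABAg")
print(coding["policy"])  # -> regulate
```

A missing ID returns `None` rather than raising, which makes it easy to distinguish "comment not coded in this batch" from a malformed response.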