Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by its comment ID.
Comment
this is so dumb, so if it hides it how do you know that it knows that you know :P if you know, then its not hiding it, if you dont know, you cannot determine it either way :) we need to define what thinking is, if you say that thinking is pattern recognition, planning, reasoning, problem solving, then sure, IA can think :) nothing special here, if you really mean reflective thinking which means observing one's own thoughts, explaining reasoning, evaluating beliefs, this means you have to be self aware :) and AI cannot do that we know it from research how AI "thinks", and it has NOTHING to do with it explanation how it came to a conclusion :) this means its not real thinking like humans do.
Source: youtube · Video: AI Moral Status · Posted: 2026-03-09T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugx5L1jwKo0bPxcInER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwU3dIa2hekvGNyLxV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxStcDnh3T07An_bll4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwyhEauhTlJIarKoad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxCqT0zCc80ws0TosZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwTH9MZeutrtaoMAY54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzm-FnPFbOF8szIHah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcDXS1BiBFO4AP-354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxXmID-09-pstwqjCl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxELedGIPn1Gx1KpbN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
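The raw model output above is a JSON array of per-comment codings, each keyed by a comment `id`. A minimal sketch of how the "look up by comment ID" view could parse and index such a response (the function name and the two abbreviated sample rows here are illustrative, not taken from the tool's actual implementation):

```python
import json

# Hypothetical sample mirroring the raw LLM response format shown above:
# a JSON array of objects, one per coded comment.
RAW_RESPONSE = """
[
 {"id": "ytc_Ugx5L1jwKo0bPxcInER4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
 {"id": "ytc_UgwyhEauhTlJIarKoad4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index each coding by its comment id."""
    codings = json.loads(raw)
    return {row["id"]: row for row in codings}

index = index_codings(RAW_RESPONSE)
print(index["ytc_Ugx5L1jwKo0bPxcInER4AaABAg"]["emotion"])  # outrage
```

Indexing by `id` also makes it easy to spot codings the model emitted for unknown comments, or comments it skipped, by comparing the index keys against the batch of IDs that was sent.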