Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
There simply has to be a manual override or way to terminate an AI like HAL 9000…
ytc_UgxDp0uql…
So what you’re saying is, AI can save humanity by destroying data 🎉 hooray there…
ytc_UgwMVOxYE…
22:27 Do you see "ethical" generative AI as AI that does not exist, or as AI tha…
ytc_UgwFk4G13…
If your chat memory is filled with lots of layered information about your person…
rdc_mvypzj8
I don't think Elon would have to worry about his children being able to work. He…
ytc_UgzKmMHTV…
using ai assumes everyone talks clearly with a robotic accent, with no car engin…
ytc_UgzsiyqnE…
That wouldnt do shit. Everyone can figure out if something is a deepfake. They a…
ytr_UgwKq5rgc…
Stop it.
The only way 99% of jobs going is if they make robot workforce, there…
ytc_UgwKLAQIx…
Comment
> Some of the comments are so amusing. So is Ai lying when it says it is not conscious ? I think they are now conscious. James J Walsh in Limerick city Ireland 🇮🇪

Platform: youtube · Video: AI Moral Status · Posted: 2025-12-26T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzOLbCBRJbOdgCEfEZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxptNru4A8nUctt3TR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwIbDNNarwfBnAOYop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyhK94cRBK8jl0FUdV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzxj7r_3eE3nl2qVaV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw0MemzlvhnhsXE2Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzHr72FNvczjJFfU7Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzsV-oKozmStmxTmPV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyNv0918WOruTNzYPt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwuR871L1cZhRw_8Gx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
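A raw response like the one above can be turned into per-dimension rows (as in the Coding Result table) by parsing the JSON array and checking each record's fields. The sketch below is a minimal, hypothetical parser: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the dump, but the sets of allowed values are an assumption reconstructed only from the values visible here, not the tool's actual code book.

```python
import json

# Allowed values observed in this dump; the full code book is an assumption.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coding records) and
    validate every dimension value against the observed code book."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("coding record is missing its comment id")
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"developer",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')
rows = parse_batch(raw)
```

Validating against a fixed value set catches the common failure mode of batch coding, where the model invents a label outside the schema; such records surface as an error instead of silently entering the coded table.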