Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Humans will become like an animal on this planet, like a chicken or an earthworm…" (ytc_UgzttMmBR…)
- "Just jump on the robot since its a circle and has no extra whells it will stop a…" (ytc_UgyVQ05yN…)
- "@ human are mammals but not all mammals are humans. Well humans aren’t god anywa…" (ytr_UgwMcPBNk…)
- "It is still unethical because even when indicating which artist the ai generated…" (ytc_UgzFYecNi…)
- "AI is the wrong name as there is nothing artificial about using computing power …" (ytr_UgyIQNyu9…)
- "A minute into watching this video, ive asked ChatGPT if avoiding chloride ions i…" (ytc_UgxxzqJ4N…)
- "Don’t worry, the AI bros are just mad that they don’t have talent or skill…" (ytc_UgxQNWFKm…)
- "I wonder if AI became self aware would they side with humanity? Maybe it would r…" (ytr_Ugy1tn-kE…)
Comment

> Thanks for the excellent video
> A very good explanation on training AI. I was always wondering how a self correctness worked. After all humans have to correct the AI

youtube · AI Governance · 2025-10-07T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyUupDJ-D2OHCP2T1p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxelrZeHpa22e60mWd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyBTvB98ffaRjomQKB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxr_VS78Y4rGTwKA214AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxru41vcQFQiLTrANB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwK5cbc86bcILvY2ah4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwhBbZfBW2oQLlKjZF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugww3EadyxclqMffft94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxoxbVHkmblkzeoda54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw3D7Xl-stDzSP4cRB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
```
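A minimal sketch of how a raw response in this shape could be parsed and indexed for the lookup-by-comment-ID view above. This is an illustrative assumption, not the tool's actual implementation: the `raw_response` string, the `index_by_id` helper, and the decision to skip records missing any of the four coding dimensions are all hypothetical.

```python
import json

# Hypothetical raw LLM response: a JSON array of coded records,
# matching the shape of the example above (IDs shortened here).
raw_response = """
[
  {"id": "ytc_AAA", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_BBB", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}


def index_by_id(raw: str) -> dict:
    """Parse a raw response and index records by comment ID,
    skipping any record that lacks one of the four dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {k: r[k] for k in DIMENSIONS}
        for r in records
        if DIMENSIONS <= r.keys()
    }


coded = index_by_id(raw_response)
print(coded["ytc_AAA"]["emotion"])  # fear
```

With an index like this, the "Look up by comment ID" view is a single dictionary access, and malformed records from the model simply drop out rather than crashing the page.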