Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgxZEGeWD…`: ffs statements by ai models expressing self-awareness are outputs generated from…
- `ytc_Ugyd5Esko…`: My lecturers are puzzled by why am I falling being my peers’ speed of submission…
- `ytc_UggoOTITY…`: clearly they have never seen I robot. and not yet can do these things?! if the w…
- `ytc_UgyWh05Gy…`: I'd be one that is 100% OK with AI art. Be anyone call it art, or not. Though, …
- `ytc_Ugw13nbUk…`: AI tech bros basically trying whataboutism (a soviet propaganda tool) with the "…
- `ytc_Ugymbzekj…`: Man the ai aint even know the graphics settings of a game and also not the optim…
- `ytc_UgxuL3bbs…`: Sooner or later we are going to have "human made" like we have "hand made" and s…
- `ytc_UgyLtNs1w…`: Living in a CONSTITUTIONAL REPUBLIC you know what I'm about to say is covered by…
Comment

> Yet another statistically empty "deep dive" that strenuously avoids the fundamental question: is Autopilot better or worse than a human driver alone? When the "expert" in automation testifies that "someone is going to die," yes, that's true...whether it is automated or human driving. Human drivers make so many errors it's pretty easy for Autopilot to be better in the aggregate. And while the data he uses is not public, only Musk is actually asking and answering this question. We should perhaps remember not to let the perfect be the enemy of the good.

youtube · AI Harm Incident · 2024-12-21T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugyu77aPSn6Lm8DUm_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxwYweqEwqZhAcR2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz3PctsnkdkZ-KHl154AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyMpPlYDXrrD8xWY394AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxE_rprRdjO8DAPHdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgztlCX8bu74tRyzlwx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVdoMlHeKD-wT6TdZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_spw2FV3hTjHqzNB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxGq3kxm_l2WrB4CpN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyHL_exO_kx_0WCV1B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}]
```
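A raw response like the one above is a JSON array of per-comment codes. A minimal sketch of how such a response might be parsed and validated before use, assuming the allowed values per dimension are only those visible in this sample (the real codebook may define more categories, and `parse_codes` is a hypothetical helper, not part of any existing pipeline):

```python
import json

# Allowed values per coding dimension, inferred from this sample alone;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "government", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose comment ID
    looks valid and whose dimension values fall inside the allowed sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        well_coded = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        if well_coded and rec.get("id", "").startswith("ytc_"):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugyu77aPSn6Lm8DUm_x4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(len(codes))  # 1 valid record
```

Filtering invalid records rather than raising lets the pipeline skip malformed or off-schema codes while keeping the rest of a batch.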