Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
"cool, now give me the gun back"
*robot starts pointing the gun towards you*
ytc_UgxndHwtr…
Hope it happens. I’m sick of the governments and Matrix making us slaves. Maybe …
ytc_UgzMFIMat…
Cool, but there’s one thing no one mentions in any AI podcast. All our tech dies…
ytc_UgzoHjgIJ…
Its really silly to think that AI will result in less but higher paid jobs. Obvi…
ytc_UgyYafO8z…
If we keep pushing competitive values into systems and documentation that they'r…
ytc_Ugzs4fUmt…
The thing is that the prompt for the original image is way longer than the one u…
ytc_UgxL_fMJC…
Today, predictive policing programs are currently used by the police departments…
ytc_UgxULcNgD…
I guess the last step is learning to fight for your rights and to force politici…
ytr_UgwdYRsmr…
Comment
In the recent horrific accident at Laguardia where a landing plane and a fire-engine collided,
could AI have prevented the accident? Obviously, that's a rhetorical question. AI had no idea
what was taking place. AI is everywhere, where was AI, where? Apparently not at the airport.
AI could not predict, given a certain combination of human activity, with all the internet
connected computing, AI did not sense any danger. Is this a sign of danger to come, or
assurance that AI is not in control of humanity?
youtube
AI Governance
2026-03-24T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzOz_OAxTm293lV2Dp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIjutQo5KghuvNuS14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxqdY4XaZaan_Ks0fR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwBxwT_4hmKA_whN3h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxSPnN_foec5bjgkGN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyORz9bu0EokOVmR4B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3KrbraeR-QWay-ah4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyYw9bUdFFEDjzLJLN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOTSaEiMs_jEbLtS14AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgytxbnpFD83oc2o3314AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
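A response like the one above can be parsed and sanity-checked before indexing for the ID lookup described earlier. The sketch below is a minimal illustration, assuming the allowed values for each dimension are exactly those seen in this sample batch; the real codebook may define more categories, and `validate_codings` is a hypothetical helper name, not part of the tool.

```python
import json

# Allowed values per dimension, inferred only from this sample batch;
# the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "unclear", "company", "user", "ai_itself", "government"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist"},
    "policy": {"unclear", "regulate", "none", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded record
    against the schema, raising ValueError on a malformed one."""
    records = json.loads(raw)
    for rec in records:
        # IDs in this dump start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Build the lookup index used by "Look up by comment ID".
def index_by_id(records: list[dict]) -> dict[str, dict]:
    return {rec["id"]: rec for rec in records}
```

With the index in hand, retrieving one coded comment is a plain dictionary access, e.g. `index["ytc_UgzOz_OAxTm293lV2Dp4AaABAg"]`.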