Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> They made ai public too soon. Full stop. It is typical of human hubris to think we can figure out the safety measures as we go. I think we are very near the point of no return. If they don't get control within a couple years, it will no longer be under human control.

youtube · AI Governance · 2025-09-04T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxmfwpSUgxNfq0nA3J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQAJqDDo1uRx1N2SV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwVRUvNb5DfnGEw6eJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzfsLtgWtI_-huR8zZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw1pBgmU0XVaAmv1Gh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1f1SuBf2WgKuvVm94AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwva8nt8ggm2DYGsOJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxdLxoubdatxy1RAet4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBLSvEL961E5HzyE14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZw9ZsQIZ-vnlJAh54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
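A raw response like this can be turned into per-comment coding rows with a short parsing step. The sketch below is a minimal example, not the pipeline's actual code: the allowed values per dimension are inferred only from the sample response on this page (the full codebook may define more), and `parse_codings` is a hypothetical helper name.

```python
import json

# Dimension values observed in the sample response on this page.
# Assumption: the real codebook may allow additional values.
SEEN_VALUES = {
    "responsibility": {"none", "developer", "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear", "mixed"},
    "policy": {"none", "regulate", "industry_self", "ban", "unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag rows whose dimension values
    fall outside the sets seen in the sample."""
    rows = json.loads(raw)
    for row in rows:
        for dim, seen in SEEN_VALUES.items():
            if row.get(dim) not in seen:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
rows = parse_codings(raw)
print(rows[0]["policy"])  # -> regulate
```

Validating against an explicit value set catches the common failure mode of LLM coders, where the model drifts outside the codebook (e.g. inventing a new emotion label) and silently corrupts downstream counts.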