Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- When the AI was factually correct but you have speds sitting there crying racist… (`ytc_UgzVXQV1f…`)
- Here guys, I vibe coded it for y’all, waymo. Just use this. `class SafeDriverPro…` (`rdc_nsyw08x`)
- let the AI have all the jobs, physical work will always be available you need … (`ytc_UgxwXVwTg…`)
- 14:02 "[W]e are in a bubble and a crash is imminent" _AND_ "the next AI innovati… (`ytc_UgwNGRir6…`)
- Science fiction writers of both literature and film have been warning humanity o… (`ytc_UgyxT0St4…`)
- When you ask what they ultimately hope to achieve with AI, you eventually come t… (`ytc_Ugx_owKBY…`)
- I think you point out the very thing that is unique in this situation - The mere… (`ytr_Ugwf2zF_x…`)
- Problem isn't AI itself, it's the people pushing competing with each other and n… (`ytc_UgxF_8zAN…`)
Comment
What troubles me about this is that technology like AI seems to be evaluated through a single lens: "Will it make money". If something can generate revenue, it gets rapidly deployed into society without adequate consideration of the broader consequences or societal impact. This purely market-driven approach to technological adoption is deeply problematic, especially when even the people who are developing the technology have no idea where it's going and what the outcomes are going to be...
I want to go back to pre-social media times. There's another example of tech thrust upon us with gay abandon!
youtube · AI Governance · 2025-09-04T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZ8tMC_iqbPOFfAtl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_pxrzCH1EcQL0cxV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwk8b5su46CTSeOMfF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgybjfcIC2yLrPpEni14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyR7pTGIw_dmjOdOLB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbRDTdadWNPFQPkT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyVb5LRcxwOVS0leaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyy16gSnnDsJuigfCR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXdTycB1yy7VH_vIt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgztGwG7us49PyQQVKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
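The raw response is a JSON array of per-comment codes keyed by the four dimensions shown in the coding table. A minimal sketch of parsing and validating such a response in Python — the allowed value sets below are inferred only from the samples on this page, not from the full codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the samples above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"outrage", "indifference", "approval", "mixed", "resignation", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        # every dimension must be present and hold an allowed value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '''[
  {"id": "ytc_UgyZ8tMC_iqbPOFfAtl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''
print(parse_codes(raw)[0]["policy"])  # → regulate
```

Filtering rather than raising keeps a batch usable when the model occasionally emits an off-schema record; rejected entries could instead be logged for re-coding.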