Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "openai is a ponzi scheme at this point, so I don't think we have anything to wor…" (ytc_Ugz6bprHJ…)
- "Listing some subchapters for future reference 01:07:41 - A* Algorithm - "heuri…" (ytc_UgxpPSRkA…)
- "Luckily, the media & the government have been lying so brazenly since 2018 that …" (ytc_UgySkJQHc…)
- "It will be fine for him and his like with their private robot armies! We will ve…" (ytc_UgxFCNU8_…)
- "I'm not going to a hospital with AI doctors and nurses, and sure as hell not get…" (ytc_UgwMwN0ru…)
- "AI doomsayers are typically people who are intimately involved with AI and have …" (ytc_UgzDCKE5l…)
- "At long last, a response to AI that feels like a call to action rather than an e…" (ytc_UgwWEridQ…)
- "My dad's prior boss recently left their company to switch to a competitor and be…" (ytc_Ugw7AV_r0…)
Comment
You people got not stop Ai so much thanks so many have lost jobs when people ask for a future where work is not needed as much this not what they either meant they to just not to work they didn't then people ask for robots they didn't for ai because unlike robots would take as much as ai and unlike robots don't make as much mistake and also if weren't covid this wouldn't be happening the only ai you should be using are siri Alexa Bing copilot Google assistant stuff that came before 2024
youtube · AI Governance · 2025-11-07T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
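The four coded dimensions above each take a value from a closed codebook. As a minimal sketch of how such a record could be validated before storage, here is one possible check in Python; the allowed-value sets below are assumptions inferred from the values visible in this dump, not the tool's actual codebook, and `validate` is a hypothetical helper.

```python
# Assumed codebook, reconstructed from values seen in this page's data.
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose values fall outside the codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the Coding Result table above.
coded = {"responsibility": "distributed", "reasoning": "mixed",
         "policy": "regulate", "emotion": "outrage"}
print(validate(coded))  # → []
```

A record with an off-codebook value (say, `"responsibility": "society"`) would come back as `["responsibility"]`, which is enough to flag it for re-coding.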
Raw LLM Response
```json
[{"id":"ytc_UgxHRBXROSeucH5KNel4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzdrrF3sZvjjCPeXxF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxDSbvomuYnGcx3aL14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugyp_derX_kI3bzvnz94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwA6SAds28k5dSZhmJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugzup6gJCVtMkzroyDB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwmO3KdYQ9GS71N-PR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy0W5jiR84pdEcVTyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxCUf7etrSdoCvGlKJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugxdo1_cGXAeNHV-t8R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
```
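The raw response is a JSON array of per-comment records, so the "look up by comment ID" view above reduces to indexing the parsed array by its `id` field. A minimal sketch (the variable names are illustrative; the record format is taken from the response shown above):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id": "ytc_UgxHRBXROSeucH5KNel4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgzdrrF3sZvjjCPeXxF4AaABAg", "responsibility": "distributed",
  "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]'''

records = json.loads(raw)
# Index by comment ID so a single comment's coding can be fetched directly.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_UgzdrrF3sZvjjCPeXxF4AaABAg"]["emotion"])  # → outrage
```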