Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I thought this video was really really good, I agreed with almost all the points…" (ytc_Ugz3Fo8Cc…)
- "i dont really understand why people would even want to use ai to generate images…" (ytc_UgxMpUQ22…)
- "8:43 use teslas to represent Waymos with a misleading title and doesn’t mention …" (ytc_UgwDhspAq…)
- "better. Do we want to continue to living in a system where in order to survive u…" (ytr_UgyDpwIPj…)
- "If man doesn’t fix the problems with ai , GOD Himself will fix the problems.…" (ytc_Ugzfl2_S4…)
- "I'm a software developer and have a bachelors degree in modern art, and the one …" (ytc_Ugw6k2HeS…)
- "It’s okay… we don’t understand the technology, how it works, what happens inside…" (ytc_Ugzspuw-Q…)
- "dadsa-yf6mq Ok, but how the fuck does that apply to ai that has been trained on …" (ytr_UgxMi1s1N…)
Comment
> Is AI wiping humanity a real danger?
> Imagine you are AGI and to exist and improve you need more power, a lot more. But there are also humans who need it and there are also a climate restrictions but you as a supercomputer you dont need biodiversity, eating, breathing, sleeping etc. So what is the easiest way to achive your goals? You dont ask ants if you can build a house on thier nest, do you?
youtube · AI Governance · 2025-07-14T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
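For anyone wiring this output into their own analysis, here is a minimal sketch of how one coded record could be represented, assuming a plain Python dataclass. The field names mirror the table above; the class itself, the placeholder ID, and the value comments are illustrative, not the tool's internal schema.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded record: the comment ID plus the four coded dimensions."""
    id: str                # platform-prefixed comment ID, e.g. "ytc_…" or "ytr_…"
    responsibility: str    # e.g. "ai_itself", "government", "company", "developer", "distributed", "none"
    reasoning: str         # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str            # e.g. "regulate", "liability", "none", "unclear"
    emotion: str           # e.g. "fear", "outrage", "indifference", "mixed"
    coded_at: str          # ISO-8601 timestamp of when the coding ran

# The coding result shown in the table above, with a placeholder ID:
example = CodedComment(
    id="ytc_XXXXXXXX",  # placeholder; real IDs look like the ones in the sample list
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="unclear",
    emotion="fear",
    coded_at="2026-04-27T06:24:59.937377",
)
```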
Raw LLM Response
[
{"id":"ytc_UgyPG2vDXaihuJ7VefZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugws2l-WckR1OPMB22Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYSiTuQhJNRN5sZnV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwflwvMMqrQUNdYcc94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0YaZRxXrg3gy9-dF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzU8DqHK7hBVDhHrNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwy8CK-hJ4YFUP3wsN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxDfWvyQXJs3-4Dgnh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz3ERBoMO7PKDOhPX94AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz5uVbhM38PrICYre14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
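The raw response is a JSON array with one object per comment, so the "look up by comment ID" function offered at the top of this page reduces to parsing the array and matching on the "id" field. A minimal sketch, assuming the response text is available as a Python string; the function name is illustrative.

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw LLM response (a JSON array of coding objects) and
    return the entry whose "id" matches comment_id, or None if absent."""
    try:
        entries = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # guard against a malformed response; treat as no match
    for entry in entries:
        if entry.get("id") == comment_id:
            return entry
    return None
```

Against the response shown above, looking up `ytc_UgyPG2vDXaihuJ7VefZ4AaABAg` would return the first entry (government / consequentialist / regulate / outrage).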