Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
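The same lookup can be done outside the UI. Below is a minimal sketch, assuming the coded records are exported to a local JSON file in the same shape as the Raw LLM Response shown further down; the file name `raw_llm_responses.json` and both helper functions are hypothetical.

```python
import json


def load_codings(path="raw_llm_responses.json"):
    """Load coded records (a JSON array of dicts) and index them by comment ID.

    Each entry is assumed to carry the same fields as the raw response below:
    "id", "responsibility", "reasoning", "policy", "emotion".
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


def lookup(codings, comment_id):
    """Return the coded record for one comment ID, or None if it was never coded."""
    return codings.get(comment_id)


if __name__ == "__main__":
    codings = load_codings()
    print(lookup(codings, "ytc_UgybX4WVp-ihhgwECsl4AaABAg"))
```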
Random samples — click to inspect
| Comment ID | Excerpt |
|---|---|
| ytr_UgxgoJZZK… | @geneherald8169 im truly curious. Cause how can a AI robot perform human duties … |
| ytr_UgzcukB2t… | @isthatatesla And FSD is called "fully self driving". Only later did they add th… |
| ytc_UgzAtv8pZ… | dont call them ai artists, and never call them ai artists. theyre just not good … |
| ytr_UgxUpcuXT… | Like the dotcom bubble. As you probably noticed, we use the internet for everyth… |
| ytc_UgzxbirGW… | 3:10 That graphic is very misleading. While it is clear, that the training of ai… |
| ytr_Ugyc21grE… | bro theres way more to using ai like this, not just a prompt, unless you use a s… |
| ytc_Ugz14Z4Eg… | oh god, this is actually happening, this is what im most scared of when it comes… |
| rdc_jrp8rpq | Whenever I contact customer support,90% of the time I need to talk to a real per… |
Comment
The main short-term risk isn’t AI deciding to kill humans. It’s humans using AI to replace other humans at every level of development and production. This is how people will become jobless—and ultimately purposeless. That scenario is far more realistic than a war with robots.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Timestamp | 2025-07-12T10:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzB9uWXos9-6lcewE54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyc-Lc6zSYZOoKBzg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybX4WVp-ihhgwECsl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4-HD34tiApg-plEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyRqr3lMZijVGX9xd14AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyvNI52eSUjwRCJVWp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzc6LBTCsd_N0XB8JF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdbfJG-3_TFYZM8dF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAK_XAjHQKAsSbVLB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxEB8Hp8GCU4lkdyRB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
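A raw batch response like the one above can be checked automatically before its values are written into a coding table. The sketch below parses the JSON array and rejects entries whose values fall outside the code sets; the allowed values listed here are only the ones that appear on this page, so they stand in for the full codebook.

```python
import json

# Code sets as observed on this page; the full codebook may define more values.
ALLOWED = {
    "responsibility": {"user", "government", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}


def split_batch(raw_text):
    """Parse one raw LLM response (a JSON array of coded comments) and
    separate well-formed entries from ones that need recoding."""
    valid, rejected = [], []
    for entry in json.loads(raw_text):
        ok = (
            isinstance(entry, dict)
            and "id" in entry
            and all(entry.get(dim) in codes for dim, codes in ALLOWED.items())
        )
        (valid if ok else rejected).append(entry)
    return valid, rejected
```

Rejected entries can then be re-queued for another coding pass instead of being silently merged into the results.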