Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a response by comment ID, or browse the random samples below.

Random samples
- "Wow yeah no I sure hope no one will be replacing social services and counseling …" (ytc_UgwEl6ocu…)
- "Anyone who claims to know the answer to whether it is or isn't is full of shit. …" (rdc_mzw7li7)
- "Unfortunately big corporate and government don’t care about regulating AI. More …" (ytc_Ugz8ETKye…)
- "I’ve noticed my content isn’t showing up in AI recommendations lately. AICarma’s…" (ytc_UgxSjbtoI…)
- "I tried generative ai for picture editing once. I just wanted to get a picture o…" (ytc_Ugw4KNQeS…)
- "Nicely formulated argument! I agree with you on all points. But yeah, this perfe…" (rdc_jhsi9sa)
- "***** If a robot has been designed to learn it will learn more than the average …" (ytr_UgiEZUx0E…)
- "They're not trying to halt all AI research and development. They just want every…" (ytc_UgxO_-4vN…)
Comment

> Without going too in-depth, but I had a thought. We know that the big threat from AI is "destroy humanity", but would AI be smart enough and self-aware enough to know that if all human life on earth were destroyed, then eventually the systems allowing the AI to live will shut down.

Source: youtube | Topic: AI Governance | 2025-06-16T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1d6a8OoiGdwJd0ll4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwI4cehGJLxyd074pN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwBjaXQmI5aD0PbdQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwvsa9KsCHIFtY5pyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjA4e-HIPU3QTqzy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwo8sjRVb-SQ7x3bD14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZQFBwTlRLZhC8PfB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwQRQpj2GnPjoOCogJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwEG5tHTahd13RkGA14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwpgl9whGKz17a2a954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
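The raw response above is a JSON array of per-comment records, each keyed by `id` and carrying the four coding dimensions. A minimal sketch of how such a batch could be parsed and looked up by comment ID (the variable names `raw_response` and `coded_by_id` are illustrative, not part of any real API; the sample records are drawn from the response above):

```python
import json

# Two records copied from the raw batch response shown above.
raw_response = """[
  {"id": "ytc_UgwBjaXQmI5aD0PbdQR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwpgl9whGKz17a2a954AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the batch by comment ID so a single coded comment can be
# retrieved directly, mirroring the "look up by comment ID" view.
coded_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = coded_by_id["ytc_UgwBjaXQmI5aD0PbdQR4AaABAg"]
print(record["emotion"])  # mixed
```

Indexing by `id` also makes it easy to join the coded dimensions back onto the original comment text for display, as in the detail view above.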