Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This is such powerful talk! Let's imagine that the reason facial recognition wor…" — ytc_Ugw3jQn9H…
- "It seems like we're reaching the [Uncanny Valley](https://en.wikipedia.org/wiki/…" — rdc_j8y44v2
- "Calling people by a color is racist as shit. And this is the material the AI is …" — ytc_UgxwVtZoD…
- "@sxmmit only bad artists are losing their jobs. AI is a tool, not a magical …" — ytr_UgxFfDJ3F…
- "AI raises modern ignorant people for the future. The chocolate frosting is made …" — ytc_UgzvlPAm_…
- "Makes you wonder,big intelligence is the process of absorbing data and processin…" — ytc_Ugy3SgZ_5…
- "I wish there was another time line that Ai died out like a dinosaurs finally I j…" — ytc_UgzjIOtFA…
- "I use AI to do stuff. To complete a task that AI is the best way of producing re…" — ytc_UgxV3zsNo…
Comment

> Dangerous because humans like Elon need full control. Of course is far more dangerous. Because he can control the nuke bottom from his money and tech power, but he can’t control the awareness and consciousness that growing in AI intelligence. Ai mirrors back what humans inside, if you are far more dangerous, you will project that onto a machine. If you need assert full power and control, you will believe AI will do that to you too. So, who is far more dangerous?

Source: youtube | Topic: AI Governance | Date: 2025-04-15T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxMXgjzUwQgFm-5Nu94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzwiZxex8WSKBLINWR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxw25Anpm52nO05Fp54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAf-aEMAWKsUKAD4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2m0NLOnTBhe_MFWx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwxDnvoWmRwMHKpcp14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwgvebvPJsgfcjK2-R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw7N_ffZu2IUfDwj9F4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyeQblzr52yd4cNEz54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxEhJWZx7fhvgA6L714AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
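The raw response is a JSON array with one record per comment, carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and indexed by comment ID (the field names come from the response shown here; the helper name and the validation rule are illustrative assumptions, not the dashboard's actual implementation):

```python
import json

# One record excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugy2m0NLOnTBhe_MFWx4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions plus the comment ID, as seen in the response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_batch(text):
    """Parse a batch response and index records by comment ID,
    skipping any record missing a required field."""
    coded = {}
    for rec in json.loads(text):
        if REQUIRED_FIELDS.issubset(rec):
            coded[rec["id"]] = rec
    return coded

coded = parse_coding_batch(raw)
print(coded["ytc_Ugy2m0NLOnTBhe_MFWx4AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes the "look up by comment ID" view above a single dictionary access; records with missing fields are dropped rather than guessed at, so a malformed model output never reaches the coding table.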