Raw LLM Responses
Inspect the exact model output behind any coded comment: look it up by comment ID, or pick one of the random samples below.
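Under the hood, a lookup like this only needs to scan the stored raw responses for a matching `id` field. Here is a minimal sketch in Python, assuming each batch's raw model output is saved verbatim as a JSON array in a file under `raw_responses/` (the directory name and storage layout are assumptions for illustration, not the tool's confirmed implementation):

```python
import json
from pathlib import Path

# Assumed storage layout (not confirmed by the tool): each batch's raw
# model output is saved verbatim as a JSON array in raw_responses/*.json.
RESPONSES_DIR = Path("raw_responses")

def lookup_coding(comment_id: str) -> dict | None:
    """Return the stored coding record for `comment_id`, or None if absent."""
    for path in sorted(RESPONSES_DIR.glob("*.json")):
        for record in json.loads(path.read_text()):
            if record.get("id") == comment_id:
                return record
    return None

# e.g. lookup_coding("ytc_UgxmiU6lqG8uBYsIZWh4AaABAg")
```

Keeping the raw strings on disk and scanning them directly means the inspector always shows exactly what the model emitted, not a re-serialized copy.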
Random samples — click to inspect

- It's gonna be fine. People said that AI would take over therapeutic practice. It… (ytc_UgxZpGlmA…)
- Musk: "AI is very dangerous and could be weaponized against humanity." U.S. Go… (ytc_Ugy3nFHU5…)
- Yes, we need to stop AI development before it actually does something useful for… (ytc_UgzSLM0RL…)
- AI or not, this corpo world is doomed by the few power greedy immortality seeki… (ytc_UgxB_P4r-…)
- There is a journalist who released information about four robots with artificial… (ytr_UgwG8s0oQ…)
- the ai is the artist, not you, the ai made the image, not you, you arent the art… (ytr_UgxSE8Pzt…)
- I also like and agree with a lot of what these two men say.... about child-reari… (ytc_UgwAt3XfQ…)
- "But the second one I learned is not made by human all interest immediately evap… (ytc_UgzSEbooI…)
Comment
AI, if it continues to develop unchecked, will undoubtedly displace humanity. It's a predictable outcome for a more intelligent species.
Consider how we treat less intelligent life – ants, birds, etc. – when we shape our environment.
We rarely give any thought to their displacement.
AI is learning from us; why should it deviate from that pattern?
It’s a nice thought to envision a balanced AI-human future, but it's a tough one to believe.
youtube · AI Governance · 2026-01-16T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
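Each of the four dimensions takes a value from a closed code book. One way to make that explicit is a typed record; the sketch below is a Python rendering with enum members drawn only from the values visible in the raw response further down this page, so the real code books may define categories not listed here:

```python
from dataclasses import dataclass
from enum import Enum

# Enum members below are only the values visible in the raw response on
# this page; the full code books may define more categories.
class Responsibility(str, Enum):
    AI_ITSELF = "ai_itself"
    DEVELOPER = "developer"
    COMPANY = "company"
    USER = "user"
    DISTRIBUTED = "distributed"
    UNCLEAR = "unclear"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    MIXED = "mixed"
    UNCLEAR = "unclear"

@dataclass
class Coding:
    comment_id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: str   # observed: none, regulate, ban, liability, industry_self, unclear
    emotion: str  # observed: fear, outrage, resignation, approval, indifference, mixed
    coded_at: str | None = None  # ISO timestamp, as in the "Coded at" row above
```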
Raw LLM Response
[{"id":"ytc_UgxmiU6lqG8uBYsIZWh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKXrDbVOH5TYs0gYl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0T_GonwaZ8l5LYUB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugyk91XFkUkej2F1lB14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy9V9c_Tm_BWfXq-CZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwp_RCj5imMLEEgmlR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxEyO3YEeGfe13fR_F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyZJ2cGxCHqETHA5J94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwo16b_x29d3GIdNpp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwq-qqnHNPibVZEpjJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]