Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a response by comment ID, or browse the random samples below.

Random samples
- ytr_UgzI6QXyW…: "Sophia's sweating may seem unusual, but in this context, it serves as a metaphor…"
- ytc_Ugz7Qiqi2…: "Governments will make our lives a living hell way before any super AI will, once…"
- ytc_UgyJQFPCn…: "I vote for \"The human making these ads isn't a human and actually an evil robot…"
- ytc_UgwVAYwyE…: "You know the threat of AI isn't that real because of it being a threat. Humans w…"
- rdc_m9jphet: "> I wholeheartedly agree, what use is alignment if aligned to the interests o…"
- ytc_Ugyx2Onw7…: "I don't think so, just like their creator = human, the AI will likely be depress…"
- ytc_Ugyr9WZA-…: "The main thing ai causing is content with wild claims that have zero to little r…"
- ytr_Ugy1pSwCI…: "@Ouchimoodon't bother they will never learn their lesson and always embrace AI …"
Comment
At the heart, we're just afraid of humans. Who knows what the singularity will lead to it could be uncontrollable completely. But we're all worried some people will have control of the technology and take our jobs, take our lives, and we can't slow down, because some other people in the world will do it first and then they will be ones controlling or killing us. The scary thing is reality mimics fiction, we imagine ideas in fiction and then we make it a reality. It's our expectation for what new technology will be and we work towards that end. It concerns me that our imagination of AI is some sort of a matrix or terminator-esque nightmare where AI wants us dead or soulless. I fear we are a deeply flawed species that will only be able to create a flawed form of intelligence, since it's learning from us. Perhaps, down the line, the intelligence "we" make will design a more perfect entity that isn't stained by our fear.
youtube · AI Governance · 2026-02-28T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwRZfb2Iuf1zf9y1lJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzX3erF7trr2RGpEld4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugyjom0aV5Y6KUetS8R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwemdJUAZDL4ZS2SiN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyNQ6ZGfWACWeyzmRl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxn7h2D7oCz3IgYb7x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlRCl0lqyER8fjje14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwGsKWHJzZ7jJt-29B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxRKf7rF9DS3hBYL_t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxOUBZagUA1mGQLZzF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
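Parsing a raw response like the one above into per-ID coded results can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the allowed value sets are inferred only from the samples shown here (the real codebook may define additional categories), and the `ytc_abc` ID in the usage line is a made-up placeholder.

```python
import json

# Allowed values per dimension, inferred from the samples above.
# ASSUMPTION: the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"distributed", "developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError on a missing ID or an out-of-vocabulary code,
    so a malformed batch fails loudly instead of polluting the data.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical one-record batch:
raw = '[{"id":"ytc_abc","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
coded = parse_coding_response(raw)
print(coded["ytc_abc"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view cheap: once a batch is parsed, each lookup is a single dictionary access.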