Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If no laws are put into place to balance between AI work and human survival, the…
ytc_UgyiaMWTc…
To constantly add new meaning to the same information. And ubiquitously reason t…
ytc_UgyyuANT-…
Shad's a loser who has stagnated in his art and instead of trying to get better …
ytc_Ugx2koC_4…
I can tell an artist at a convention used AI because their work looks like an un…
ytc_Ugx9jPsIw…
OpenAI employees were probably the last ones to expect that AI would make their …
rdc_m9h5206
bro I want to be a artist/designer/writer later in life and if a can’t bc of ai,…
ytc_Ugyg5And0…
If the majority of the people in the know of developing a technology all agree t…
ytc_Ugwlx12ur…
You guys are lost if you think AI is or can be sentient. Electrons no matter how…
ytc_Ugzo9-DDL…
Comment
Karen Hao is incredibly right, but at the same time incredibly wrong. She underestimate the potential exponential grow of A.I. and there is one law I don't like, If a world power develops revolutionary weapons or artificial intelligence, other countries have no choice but to follow. Saying otherwise is a completely blind attitude, or the proof that you are working for one of those world power.
youtube
2026-04-15T22:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugztrwz-8UBoUledn414AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxf3hs0W2F4QYiiYcB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx2uSM6lrp-LJg0QIt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwtd1mB6imMHKwOig94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzXO7prxWfydNrXCIt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwjwNOoqg9JO0XvWEF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOmYcN5658UcS5soF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzXb5zv0t3kpMNTDgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxVs7Nz1ezaDyqhlKl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxGesliM3uMv15PuQN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
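The raw response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of how such output could be parsed and indexed to support the "look up by comment ID" view above (assuming Python and the standard `json` module; the array literal here is abbreviated to one record from the response for illustration):

```python
import json

# Abbreviated raw model output: a JSON array of per-comment codes.
raw_response = """
[
  {"id": "ytc_UgxGesliM3uMv15PuQN4AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "mixed"}
]
"""

# Parse the response and index records by comment ID for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = codes["ytc_UgxGesliM3uMv15PuQN4AaABAg"]
print(rec["responsibility"], rec["policy"])  # government liability
```

Indexing by ID also makes it easy to detect missing or duplicate codings: compare `codes.keys()` against the set of comment IDs that were sent to the model.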