Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Dr. Roman Yampolskiy is obviously brilliant in the field of AI, but he's not an …" (ytc_UgxnH00no…)
- "THANK YOU! AI may not raise a big bad robot army but could cause a worker's revo…" (ytc_Ugx_m1obU…)
- "Crazy to think people give more critical thinking to robot rights when we unnece…" (ytc_Ugg2mLhYU…)
- "I was actually very impressed by her for almost one and a half hours, and thinki…" (ytc_Ugz8v8p55…)
- "To me, I find the introspection caused by it far more daunting than AI itself. A…" (ytc_UgyGLivzx…)
- "@ true, true. But the thing is, they wouldn’t know because it isn’t their art. A…" (ytr_UgxJTOkVT…)
- "We're aware of that, the point is: could those original images have been generat…" (ytr_Ugw92Asy7…)
- "The irony is that AI, or intelligent algorithms, will impact the white-collar se…" (ytc_UgzeyR5gr…)
Comment
I think one fact should be clear. Given that AI's are set to continue to advance in intelligence (the rate doesn't matter for the purposes of this), humans are going to come into conflict with AI at some point (because of course we are). It is only a matter of time. A practical conversation that no one seems to be having is how are we going to deal with the conflict?
youtube · AI Governance · 2025-01-15T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyuKve0p8NacQZ3jl94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxliOdOHICKNldjUnV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRscm_LqcObDfLevF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw6hcLS-ig_2l0mKih4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzgNakNI2froR_VBJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvbW-3XGJXi6zQWkp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx82TcIE1NIjP5EFhd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzghbarzZgKxERfW_R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypFt-geNkrVXiCZKZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzgIK4NlPPKCCuuXFB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
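The raw response above is a plain JSON array, one object per coded comment, so lookup by comment ID reduces to parsing and indexing it. A minimal Python sketch of that step (the category vocabularies below are inferred from the values visible in this sample; the project's actual codebook may define more):

```python
import json

# Allowed codes per dimension, inferred from this sample output.
# The real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "ai_itself"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "indifference", "resignation", "approval",
                "fear", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    rejecting any record with a missing or out-of-vocabulary value."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id
```

With the array shown above, `parse_codings(...)["ytc_UgyuKve0p8NacQZ3jl94AaABAg"]["emotion"]` would return `"outrage"`; a record whose value falls outside the known vocabulary fails loudly instead of being silently indexed.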