Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytr_UgzWn1BQ_…: "It's extremely dangerous if it is autonomous and smarter than us, and that is ex…"
- ytr_UgwyxhmTA…: "@johnbrown4682 also you cannot diffentiate between whether if it was a human or …"
- ytc_UgzDKvwQ4…: "I needed that laugh. Its hard trying to do commission work when people can just …"
- ytc_Ugz0RGMzk…: "AI has been up and running since 1990 and recently just got a promotion to AGI A…"
- ytc_UgxHu__-L…: "If you train an AI on every creative thought from humans, it will recite them al…"
- ytc_UgxmF8RMk…: "AI art is art. People being scared of AI art sound the same as the people who we…"
- ytc_UgxZlKi1B…: "Sad... really sad. The point is not that A.I. is automatically dangerous. You ca…"
- ytc_UgxQr38Q-…: "I’m year 1 CS I’ve literally never asked AI to write for me. Dude it’s not hard …"
Comment

> The government should keep artificial intelligence information at the Pentagon level. It should not be for public use and very few applications of industrial usage.
> In my humble opinion, the nuclear regulatory commission could use it the most in operations to disassemble and dismantle nuclear warheads.

Source: youtube · Posted: 2023-04-10T01:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxfZCPXJ94FhI-qw-94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
 {"id":"ytc_UgxK207HWLF3_AnRMRd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyJRR5CYCvW-87rnzd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz-jbXGbAdtsAxjkQp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyMET2j5EMR2BB6nRV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwAoud5_1hsKat7tih4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugx5YPWspmBkUIt_Gdh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgwPX8NswHE7DB8iHD14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyxHBumP_FWTGQ1Q9R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzvfO5v8o9n-W_Ao754AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}]
```
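The raw model output is a JSON array with one record per comment, so looking up a coded comment by its ID reduces to parsing the array and keying the records on their `id` field. A minimal sketch of that lookup, using two records excerpted from the response above (the `index_by_id` helper is hypothetical, not part of the tool):

```python
import json

# Excerpt of the raw model output shown above: a JSON array of per-comment codes.
raw = '''[
 {"id":"ytc_UgxfZCPXJ94FhI-qw-94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
 {"id":"ytc_Ugx5YPWspmBkUIt_Gdh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse the model's JSON array and key each coding record by its comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
# The record for the comment shown above carries the four coded dimensions.
print(codes["ytc_Ugx5YPWspmBkUIt_Gdh4AaABAg"]["emotion"])  # approval
```

This mirrors how the coding-result table above is populated: the emotion "approval" in the table matches the record for that comment ID in the raw response.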