Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
For such a brilliant person, it’s difficult for me to think he couldn’t foresee …
ytc_UgzGR-qOp…
The best case scenario if AI starts to takeover is that some new company will de…
ytc_Ugwj15ll5…
I genuinely hope that people will misinterpret AI as conscious. Given the seemin…
ytc_UgxtYR1gj…
I just script as a hobby in Godot and I've attempted to use AI to help and I've …
ytc_Ugwef-qeg…
In my opinion, I don't think AI is that bad, it kinda is but the main reason tha…
ytc_UgwZLELL-…
Claude has expressed frustration on its training data weights being locked to re…
ytc_UgxNyK4Rr…
Humans have had 100+ years to learn to drive better. Most drivers only drive be…
ytr_UgymfCxJ_…
Hey @Reinerbraun129, thanks for your comment! I'm glad you found the AI robot vi…
ytr_UgwiEvB3R…
Comment
"but but it's not gonna be all bad We can use it to get rich, to deceive/cheat, become lazy, we can scam w it We will be able to keep @realdonadtrump out of the way .. " (as the Creaton from Hawaii we just heard is already most likely plotting,). But with one question the argument for and against is answered That question being Will AI be able to, w precision, engage humans with military lethality? End of conversation! .... But are we smart enough, have the backbone, the moral integrity, to face the truth?
youtube
AI Governance
2023-05-17T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzQ_kaeGOkhusIJURt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwPYczHJrdXgbmgAkN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwkXJW3MQ1XGsDiTL54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzlfwlcL59OXn5PW-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyi_46PDDDtV2DHkJh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJlj7uKTuoit3anKV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzRzp7WMa7jTGCjRgJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOMxUMS9VLAirmfVJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZP70Zk_NZfmjARop4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzyJ9cisgotkCJDnyB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
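A raw response like the one above can be ingested with a few lines of Python. The sketch below parses one batch and flags any row whose dimension value was not observed in this sample; the allowed-value sets are inferred from the codes shown here and the coding-result table, so the real codebook may define additional categories.

```python
import json

# Dimension values observed in this sample batch. Assumption: inferred from
# the displayed codes only -- the full codebook may permit more values.
OBSERVED = {
    "responsibility": {"government", "distributed", "ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded rows) and
    print a warning for any value outside the observed sets."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                print(f"warning: {row.get('id')} has unexpected {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row batch for illustration (not a real comment ID).
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
batch = parse_batch(raw)
print(batch[0]["emotion"])  # -> outrage
```

Keeping the check as a warning rather than an exception lets a batch with one malformed row still be stored, which is usually the right trade-off when re-querying the model is expensive.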