Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "AI will decide that there are too many people on the world that cannot be sustai…" (ytc_UgyFRlMO9…)
- "If AI gets to Blade Wolf’s level and sees us talking all this shit be might be d…" (ytr_UgzufzgI1…)
- "Alex is really good at asking a question over again in a slightly different way …" (ytc_UgxpgvOja…)
- "did you miss the part where companies are already using self driving trucks betw…" (ytr_UgwKFBJvj…)
- "Self driving cars are allowed to crash and cause incidents, as long as it is sta…" (ytr_Ugzo1vbj4…)
- "I’m 33. Discovered AI one year ago. Since then: – published a book on Amazon – …" (ytc_UgyCxhvo2…)
- "Well, the problem is: it’s a race for everything. So, safety isn’t in the equati…" (ytc_Ugxn71dq8…)
- "Ask the Ai to inform from what sources made the art and send hin to court.…" (ytc_UgwzDOOw0…)
Comment

> He may be right, but the problem is that US have a lot of enemies. If the US starts regulation, thus STOPS AI Development, then the US enemies will SURPASS US AI Development and that will be MUCH MORE FCKD UP

Platform: youtube | Topic: AI Governance | Posted: 2023-04-20T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxz23WHT8grS5gkE-F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyU5EeEku9L1s09HiZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzb-CHuw8-NTs94yvh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLOtQyfVCmLyKlCcF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzf7O-YMXNhyOqa3mN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzpzatw453mCkoninp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxmbpZ9b8jUmgYfKkt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxCV2Sor7Bt7CCaFph4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugwf3pmn-S3JaCnbuiZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw6kUDgfcL5sXfFXdl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
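The raw response is a JSON array of one object per comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of parsing such a batch and looking a row up by ID might look like the following; the per-dimension vocabularies here are only inferred from the values visible in this dump, not taken from the actual codebook, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from the codes observed above;
# the real codebook may contain additional categories (assumption).
CODEBOOK = {
    "responsibility": {"government", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response and index valid rows by comment ID.

    Rows with an unknown value in any dimension are skipped, so a
    malformed model output cannot silently enter the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[row["id"]] = row
    return coded

# Example: one row from the batch above.
raw = '''[
  {"id": "ytc_Ugzpzatw453mCkoninp4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''
coded = parse_batch(raw)
print(coded["ytc_Ugzpzatw453mCkoninp4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "Look up by comment ID" inspection shown above cheap: once the batch is parsed, retrieving any comment's coding is a single dictionary access.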