Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "@Otome_chan311 If ai that was sentient became evil it would likley be our predj…" (ytr_UgyiCVdSV…)
- "I wouldn't go quite so far as to say hyper realistic. More like animated waxwo…" (ytc_UgyaKFYK1…)
- "3:54 - 4:30 / 6:30 / 7:09 / 8:05-10:25 ai copying themself / 11:18 / 11:58 ai risk / 12:55 …" (ytc_Ugwu6nGtG…)
- "You are very naive if you think there are not government who would make it kill.…" (ytr_UgzeNXtgZ…)
- "THis is a known problem and is mentioned in literally every public AI before you…" (ytc_UgyOcK6Jk…)
- "I don't know how but I've trained all my chatbots to speak in Gen Z slang, but i…" (ytc_UgyIQzxba…)
- "Because MAGA doesn't plan on the SOS being the GOP candidate for president in 20…" (rdc_oi3w5nl)
- "We might start talking about them when the AI, unprompted, starts expressing wan…" (rdc_jegcumw)
Comment
True. And that is because the asteroid does not provide any other benefit. The AI systems being developed now provide information or analyze or diagnoze data better than we do. E.g. in medical areas, it is starting to become stronger in diagnostic analysis. However, similar systems might one day create a computer virus that wipes out all the computers in the world. Or one of those humanoid robots mixes up some chemical that will create the next pandemic. Who knows.
youtube · AI Governance · 2026-03-15T16:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgwcE55bOBxHr95Vl6h4AaABAg.AUMgGmaM7H8AUOAi0fv8e_","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytr_UgzSKdEjnC9PL1tdivh4AaABAg.AUMWOK9o4KVAUOtsfvVzDK","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwVDlusgcqyc9tD9gx4AaABAg.AVI-Xpe8uJLAVI0JRog2y_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgwS7NskPnj8zauivUR4AaABAg.ATckouSciQ5ATcmNaqXjQE","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyZbpcPazlduorNkSB4AaABAg.ATIy5bKuIRWATIyimdItSY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz7-tca-nVR30lEzc94AaABAg.ASsL0raRLJfAUNqQnlFAPy","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugz7-tca-nVR30lEzc94AaABAg.ASsL0raRLJfAUj7j_Lc_RD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugz7-tca-nVR30lEzc94AaABAg.ASsL0raRLJfAVnLp-BFmQq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxiddK_BLuFDZDI8Y54AaABAg.ASoim4X4TqoAUM1a3CbYXF","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwfBxjx0sW5kbskYZ14AaABAg.ASTbmvyuBwaASTfzQ-r-RS","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]