Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- rdc_nnudm2a: "I don’t care about the divide lol. There is a reason 20% of the jobs have a dec…"
- rdc_oe2q5ak: "Honestly culture as a whole is really shallow. Men go online and talk about dati…"
- ytr_UggXH575m…: "THE ZOMBIE TRAINER No, because when a human is knocked unconscious (most of the …"
- ytc_UgxF5Ani7…: "I wouldn’t be surprised if the property management company got a kickback from O…"
- ytc_UgyYMV4Xg…: "Intelligence might be replicable in a computing system/machine, eg pattern recog…"
- ytc_UgwFldBdS…: "Generative AI: Expediting the Climate Change Apocalypse by cutting down forests …"
- ytc_UgxBa_qB1…: "UBI wont happen, there is always room in the rare earth mines. We all will be wo…"
- ytc_UgwMYpAVs…: "We raise our kids as parents to have a certain set of values. They learn moralit…"
Comment
The future of AI is not a predetermined outcome but a collective construction. While the technological momentum is undeniable, the societal trajectory of AI remains within human agency. By proactively addressing the ethical dilemmas, mitigating the risks, and strategically harnessing its immense potential, humanity can steer AI towards a future where it serves as a powerful force for progress, equity, and human flourishing. The time for decisive and collaborative action is now, to ensure that the promise of AI outweighs its peril.
youtube · AI Governance · 2025-09-04T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzvzhoV4Oty4-tcpnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzeTA7O-KjP3M0EzcF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwNrDpRHoxXpuEdzpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLDi4I5FZIG2ukBJV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw8OvSFi_qGHTBifbt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwCuVl4oZfzu0V766V4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyyypPNmFW7uWRNbsh4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"ytc_Ugwj3aqyP4kfrQqLJWF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwSE087kD9tseUUiAx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrcRlAVtOwY4Gf1yx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
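The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed and indexed by comment ID — the `parse_coding_batch` helper is hypothetical, and the field names are taken only from the records shown here, not from any documented schema:

```python
import json

# The four coding dimensions every record is expected to carry,
# as observed in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID.

    Raises ValueError if a record lacks an "id" or any coding dimension.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record without an id: {rec!r}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{comment_id}: missing dimensions {missing}")
        coded[comment_id] = {d: rec[d] for d in DIMENSIONS}
    return coded

# Usage with a two-record excerpt of the response above:
raw = '''[
  {"id":"ytc_UgyLDi4I5FZIG2ukBJV4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzeTA7O-KjP3M0EzcF4AaABAg","responsibility":"unclear",
   "reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''
coded = parse_coding_batch(raw)
print(coded["ytc_UgyLDi4I5FZIG2ukBJV4AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes the lookup-by-comment-ID view above a single dictionary access, and the validation step surfaces malformed model output before it reaches the coding table.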