Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> AI cant even count or do basic math reliably, it makes stupid mistakes all the time, so to actually believe AI could destroy humanity is a far stretch, however the actual way AI will destroy humanity is by providing bad advice and idiot humans listening to it and dying by misadventure, not quite the hollywood stories we are made to believe will happen!!!

youtube · AI Governance · 2025-08-03T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz5ulhgGhXrVewL03d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyX72GBfjzeGr6spHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw7cT778S7FyfAM4WJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxUNQXRxRk5lzoR--14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxN8krREGN2ChW-xX94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx1U4UuWpaXEVm_mxx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw2d1QEgjY4SxXX0uR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzWaFX2Fkkos7ggBpV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwqJdhxDZNa6D9UV6N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz60YI2lkBvNaOkmYd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
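A raw response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a minimal, hypothetical implementation, not the pipeline's actual code: the `CODEBOOK` value sets are inferred only from the values visible in this page's examples, and the full codebook may define additional categories.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample records shown above; the real codebook may be larger.
CODEBOOK = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into
    {comment_id: codes}, dropping records with a missing ID or any
    value outside the codebook."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # skip records without a usable comment ID
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded
```

Indexing by comment ID is what makes the "look up by comment ID" inspection cheap: retrieving a comment's codes is a single dictionary access.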