Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect.
- `rdc_mw0kmum`: One conspiracy level deeper: these post topics are suggested to people topics by…
- `ytr_UgzzA61XU…`: Yeah but here's the thing Mark. A true AI will be needed for these to work and …
- `rdc_mjhw0hn`: His thing I don't because all previously all of my passwords were saved to my Go…
- `ytc_UgxuTE4A5…`: I come from traditional art, been doing it since I have memory and it's practica…
- `ytc_Ugz8TGPLr…`: good thing they don't enjoy cartoons (also, the smudge tool commenter had an ai …
- `ytc_Ugy1s6BYi…`: Why are scientists afraid of AI? FROM NEPOTISM, TO HARASSMENT, RACISM, SEXISM, R…
- `ytc_UgzopClho…`: I am proud of the Internet at times. Especially here. A.I in the long run still …
- `ytc_UgymQQDGC…`: Many laws, not to mention their supposed enforcement, are unjust. Breaking out o…
Comment

> You can tell intelligence to not do something because it will be the same like asking humans something because we are capable of compromises. To make AI safe you have to code it into them and make them incapable of doing certain things and restrict access to AI for possibly dangerous things. Like if AI had conciseness and access to cars then it could kill countless people. So AI is ok to use but you have to meet certain conditions to be safe.
youtube · AI Harm Incident · 2025-07-24T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgytMSzj2ck6R9J92AV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyRzt1BxYrzdb7Oho94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgynAxpK5hj_ux5wK5B4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwH5LcYf-A4n68lXql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZh_Z4zGQnNpIrPa54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-GD_AYJ30dSrmMbN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyExMCGUQd6tFOIVlZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxCJBEAlUKzPCWVaHZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugynspyy6JvTus-BXlB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCBkM08Zc0GlafYA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
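A pipeline consuming these raw responses has to parse the JSON array, check each record against the coding scheme, and index the records by comment ID so a coded comment can be looked up as above. A minimal sketch, assuming the allowed category values are exactly those seen in the examples on this page (the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the coded examples above;
# this is an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation",
                "indifference", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into an id -> record index,
    rejecting any record with an out-of-schema value."""
    index = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        index[rec["id"]] = rec
    return index

# Look up the first coded comment from the raw response above.
raw = ('[{"id":"ytc_UgytMSzj2ck6R9J92AV4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
coded = parse_response(raw)
print(coded["ytc_UgytMSzj2ck6R9J92AV4AaABAg"]["policy"])  # regulate
```

Validating against a fixed schema at parse time catches the most common LLM coding failure, an invented or misspelled category label, before it silently enters the dataset.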