Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
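If you are working with the exported responses directly rather than through this page, a lookup is just an ID match over the batch array. Below is a minimal sketch in Python, assuming each raw response is a JSON array of objects with an `id` field (as in the example at the bottom of this page); `lookup_coding` is a hypothetical helper, not part of the tool.

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding for one comment ID from a raw batch response.

    Assumes the response is a JSON array of objects, each carrying an
    "id" field (e.g. "ytc_..." for comments, "ytr_..." for replies).
    """
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None
```

For instance, `lookup_coding(raw, "ytc_Ugy64kS0BCVecSoiMJN4AaABAg")` run against the raw response below returns the record rendered in the coding result table.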
Random samples — click to inspect
- I wonder if the moms are the ones who get deep faked they will stop the whole "m… (ytc_Ugxlzkb8c…)
- If we make general ai euphemism for super ai / Our goal is it to do most of huma… (ytc_Ugx9zs2T7…)
- On the bright side, it will be impossible for the ultra wealthy to “own” AGI. Ev… (ytc_UgzlqNye0…)
- @skad2058 Where do you think the Ai will learn to explain the meaning of a piece… (ytr_UgxgRbo2P…)
- I can’t get AI to correctly tell me if new MacBooks have an OLED screen. How is … (ytc_Ugzd-3u-J…)
- @xjood805 You don't know who he is? He's the CEO of ChatGPT. In a recent intervi… (ytr_UgzTQhXiJ…)
- Oh F OFF! Blaming it on AI algorithms as if we didn’t just see what the US and I… (ytc_UgzzZd91F…)
- The invisible biological weapon is most concerning. / In regard to the human spe… (ytc_UgyAo4gWp…)
Comment
artificial intelligence isn't dangerous in and of itself, it might become dangerous if placed in something like a tank but inside of a handheld device the worst that can happen is that your phone stops working. People are kinda similar. Your brain can't directly hurt anything, it requires a tool to do it's tasks (your body) and using that tool it can then perform either acts likened to a saint, or take the life of another being. In the end, it is all about the way that intelligence interacts with the world and beings around it
youtube · AI Governance · 2025-06-21T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
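Each coding result carries the same four dimensions plus a timestamp, so it maps naturally onto a small record type. A minimal sketch, assuming the category values visible in the raw response below; `CodingResult` is a hypothetical name, not necessarily the project's actual data model.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One LLM coding of a single comment, as rendered in the table above."""
    responsibility: str  # e.g. "company", "developer", "user", "ai_itself", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "liability", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "mixed", "indifference"
    coded_at: str        # ISO 8601 timestamp
```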
Raw LLM Response
```json
[
{"id":"ytc_UgxHVyeheBhcnEMSjEx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxK8dt5g5CtG9tusMF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyi6DO9Ca6WpfJkq1x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyoVn277vDIxMMZxi54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy64kS0BCVecSoiMJN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx13Xgi1vSpytmU8BJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDi_trf9sYJX-rA914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJnuF-SYAb0ejmCml4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQZOtXqDMbp-czDtp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmFOU8zkOqu3MilRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
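Because the model returns one JSON array per batch, its output can be checked against the expected categories before it is stored. A minimal sketch, assuming the value sets observed in the responses on this page are the full codebook (an assumption; the project's real codebook may include other values).

```python
import json

# Value sets observed in the raw responses above (an assumption,
# not the project's authoritative codebook).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference", "unclear"},
}

def validate_batch(raw_response: str) -> list[str]:
    """Return a list of problems found in one raw batch response."""
    problems = []
    for entry in json.loads(raw_response):
        cid = entry.get("id", "<missing id>")
        for field, allowed in ALLOWED.items():
            value = entry.get(field)
            if value not in allowed:
                problems.append(f"{cid}: unexpected {field}={value!r}")
    return problems
```

Flagging out-of-set values rather than silently accepting them makes it easy to spot batches where the model drifted from the prompt's category list.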