Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID (a minimal lookup sketch follows the sample list below)
Random samples — click to inspect
- Why worry about people buying stuff. Remember the Georgia Guidestones? The elite… (ytc_UgwsOLZOc…)
- mocking someone who is just asking for help is really crappy behavior you guys d… (ytc_UgzJ9JsX-…)
- Is it just me or does this video about ai doing this shit seem like the videos i… (ytc_Ugzbj14R9…)
- It worth to mention the movie Chappie. No doubt, that kind of AI in it could fit… (ytc_UgytkwZrr…)
- Agentic AI is overstated in what it can do today. There is still a lot of work t… (ytc_UgyVDJk5G…)
- AI should always have an adversary. An agent which reduces weights for an action… (ytc_Ugwm_p9fs…)
- @ring Why is this feature automatically enabled? This should be opt in only. You… (ytr_UgyLoaANq…)
- Good luck to everyone for "taking care of dogs with anxieties" when ai has taken… (ytc_UgzzAAF59…)
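The lookup above keys on a comment's ID. Below is a minimal sketch of how such an index might be built, assuming the coded records are stored as JSON Lines with an `id` field; the file name and storage format are assumptions, since the tool's backend is not shown on this page.

```python
import json

def load_index(path: str) -> dict[str, dict]:
    """Build an id -> record index from a JSON Lines file."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

index = load_index("coded_comments.jsonl")  # hypothetical filename
print(index["ytc_UghzZ-K8T33kwHgCoAEC"])    # full ID taken from the raw response below
```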
Comment
The only reason AI clever enough to kill humanity would kill humanity, is that they are so human they would find us so much as a threat to them that we think they are to us. If we ever make a fully sentient AI, we must first make sure we are not a threat to them. Then, they will not be a threat to us. Obviously, psychology is more complex than what I can fathom on a Thursday. However, it might be built on rather simple principles, something that can be recreated with code. You can break human psychology down to binary seeing as though the world consist of a set amount of quark. So AI psychology would be just as complex as it's human counterpart.
We don't need to be killing machines, nor do AI.
Source: youtube · Posted: 2015-07-30T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
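For reference, a coded record like the one above could be modeled as a typed structure. This is only a sketch: the `Literal` label sets contain just the values visible on this page, and the actual codebook may define more.

```python
from typing import Literal, TypedDict

class CodedComment(TypedDict):
    # Label sets below list only the values visible on this page.
    id: str
    responsibility: Literal["ai_itself", "developer", "company", "none"]
    reasoning: Literal["mixed", "consequentialist", "deontological"]
    policy: Literal["none", "regulate", "liability"]
    emotion: Literal["mixed", "approval", "fear", "outrage"]

# The "Coded at" timestamp is not part of the raw model response below,
# so it is presumably attached by the coding pipeline when rows are saved.
```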
Raw LLM Response
```json
[
  {"id":"ytc_UghzZ-K8T33kwHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggDitZp2No-4ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggBUWzgbO_wAXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjGKqitLzXsyXgCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgirTe4Oz4K71ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugijts-RK_hu6ngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgjWmK_f0a2WkngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgjBaFAfZPlcj3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghpZV-KGnAD-ngCoAEC","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugi-b3JxHJjEVngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
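Since the model returns one JSON array per batch, a raw response like the one above can be parsed and sanity-checked before per-comment rows are written. A minimal sketch, assuming the allowed labels are exactly the values observed on this page (the full codebook is not shown):

```python
import json

# Label values observed on this page; the real codebook may allow more.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "none"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"mixed", "approval", "fear", "outrage"},
}

def parse_batch(raw: str) -> tuple[list[dict], list[dict]]:
    """Split a raw LLM batch response into valid and rejected records."""
    valid, rejected = [], []
    for rec in json.loads(raw):
        ok = isinstance(rec.get("id"), str) and all(
            rec.get(dim) in labels for dim, labels in ALLOWED.items()
        )
        (valid if ok else rejected).append(rec)
    return valid, rejected
```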