Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_Ugx0rgxBx… — "@ It's more of showing who doesn't consent to their art being used to train. If…"
- ytr_Ugz45LDKD… — "No… you'll see improvement gradually overtime if you work consistently, that's h…"
- ytr_Ugw46MuLX… — "Mississippi went back to a classical education, using programs like hooked on ph…"
- ytc_UgzOAxTig… — "So the issue of society being racist hasn't changed, but AI is bringing it to th…"
- rdc_m9gg0oq — "Deepseek is the number one contender for an agentic model for people who are usi…"
- ytc_Ugz27O5Bc… — "A reflection at 3:20 or thereabouts. I think this is to be expected. Humans se…"
- ytc_Ugw4Beu2-… — "Ai is going will make super rich even greater controller of resources. I have …"
- ytc_UgyhxpCCI… — "This is so sad. I completely understand where you and others are coming from. Wo…"
Comment
If, as Dr. Hinton says, 'super intelligent' AI will (or does) blur the distinction between human and computer, then it would stand to reason that AI will be as good (or better) as humans in predicting its own future state(s) or condition(s). Scientists, authors, and moviemakers have all done a remarkable job of predicting future states of the human condition in the last century or so. Thus, I have a simple question that should be queried of the most advanced AI machine. Does AI predict itself becoming dangerous/lethal to humankind - as it develops its own 'superiority complex''? If so, then also query AI as to what programming steps should humankind take to mitigate this future danger? In other words, why not use this same "super intelligence" to insure humankind's self-preservation?
youtube · AI Governance · 2025-08-21T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgysCgRNXesigVtlrVl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxBcFErBwOCDScABJJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwbs_x4k-JOi6eiXzF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugy_taybzVe7HBqqhdp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzjd-azVGMmOVjw-id4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzfc_sM6kKnBYeLhu94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzAq_mm1XN3_Rs9Dd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6Od6z9ztkeBfSezJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy5RjUMKusPeyjG0E54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwf4yILCr2UOaBJ1BN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
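
The raw response above is a JSON array in which each element codes one comment along the four dimensions shown in the result table. A minimal sketch of how such a payload could be parsed, validated, and indexed for look-up by comment ID — the allowed value sets below are inferred only from the values visible on this page, and the full code book may define more:

```python
import json

# Allowed values per dimension, inferred from this page (assumption: the
# real code book may include additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"fear", "indifference", "resignation", "outrage", "approval", "mixed"},
}

# Two records copied from the raw response above, standing in for the
# full model output.
raw = '''[
  {"id":"ytc_UgysCgRNXesigVtlrVl4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxzAq_mm1XN3_Rs9Dd4AaABAg","responsibility":"ai_itself",
   "reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

records = json.loads(raw)

# Reject any record whose value falls outside the allowed sets, so a
# malformed model response fails loudly instead of polluting the data.
for r in records:
    for dim, allowed in ALLOWED.items():
        assert r[dim] in allowed, f"{r['id']}: unexpected {dim}={r[dim]!r}"

# Index by comment ID so a single coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_UgxzAq_mm1XN3_Rs9Dd4AaABAg"]["emotion"])  # indifference
```

Validating before indexing means an out-of-vocabulary label (for example, a free-text emotion the model invents) is caught at ingestion rather than surfacing later as a stray category in the dashboard.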