Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "You cannot control it, and it is already too late. AGI is not a big leap, AGI is…" (ytc_UgxxUyE3J…)
- "Right now all the LLMs do is synthesizer information. If that's all your degree …" (ytc_Ugwf9e8Ri…)
- "felt the same to me as the ai saying "i feel my heart aching because of this dec…" (ytr_UgwcIrKrW…)
- "If you think about it, they can’t exist without real artists. You can hardly cal…" (ytc_Ugzr-_V1v…)
- "There are a lot of non-sequitors and inconsistencies in this persons reasoning. …" (ytc_Ugzka9Rms…)
- "Before we ask day something that we will regret... ALL HAIL OUR ROBOT OVERLORDS!…" (ytc_UgwHe-h5J…)
- "Not a promo lol but Imagine bo has been insane for me. I made a full landing pag…" (ytc_UgysKsRzd…)
- "this is BS when they say waymo is better drivers than humans, that is BS, i see …" (ytc_UgzEsvfRQ…)
Comment
i dont see an intelligence that is predicted to harm or dispose of humans as More Intelligent. The AIs so far are trained in documents written by mostly Humanists, the anti-human authors didnt make it to the press often. Even if some like Mein Kampf are in the training data there's 1000 authors denouncing that book. So an Ai left to its own devices based on current sets of training documents wont go evil. But the unforeseen is unforeseen. Now the same DNN trained by evil writings will be evil, but that's the human choices, and the "good AIs" may be the only ones capable to defend us against the bad ones, so dumbing down is possibly not a good idea. I noticed that AI "experts" are regularly un-promising that the super AGI is near and will be able to think, and invent. This makes me think there are forces like the Pentagon who want to freeze AI developments in case China for example could use them to use them to gain supremacy in cyberwars or to strategize old school wars or become the top industrial force, invent better/cheaper Domestic Robots for ex.
youtube
AI Governance
2025-06-23T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzKoLv-PzAm-LhV8ap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxp_U1q07iztPHcr6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWH4ietbUL3-tPdr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_HSTyv6MB8755cot4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-NJ61zfFBcEpRWhV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCDEWaCDp0nwGnJHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxzl-hiOiJlUG7zbk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyzkNhlU9uhlBJ95xd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyngWy6jd1UnwCXstx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzuPrOorFSI5DwYgRZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
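The raw response above is a JSON array of per-comment coding records. A minimal Python sketch of how such a payload could be looked up by comment ID and sanity-checked is shown below; the set of allowed category values is an assumption inferred only from the values visible on this page (the full code book may define more categories).

```python
import json

# Allowed values per coding dimension, inferred from the Coding Result
# table and raw responses shown on this page (assumption: the actual
# code book may include additional categories).
DIMENSIONS = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "indifference", "fear", "outrage"},
}

def lookup_coding(raw_response: str, comment_id: str):
    """Return the coding record for a comment ID, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

def invalid_dimensions(record: dict) -> list:
    """List dimensions whose value falls outside the known categories."""
    return [dim for dim, allowed in DIMENSIONS.items()
            if record.get(dim) not in allowed]
```

For example, looking up `ytc_Ugw-NJ61zfFBcEpRWhV4AaABAg` in the response above would return the record coded developer / virtue / regulate / fear, and `invalid_dimensions` would flag nothing for any record in this batch.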