Raw LLM Responses
Inspect the exact model output for any coded comment. Enter a comment ID to look one up directly, or open one of the random samples below.
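The same lookup can be reproduced offline against an export of the codings. A minimal sketch, assuming the records have been dumped to a JSON Lines file (`codings.jsonl` is a hypothetical name; each line is one object shaped like the entries in the Raw LLM Response section below):

```python
import json

def load_codings(path):
    """Index coded comments by their comment ID for direct lookup."""
    codings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            codings[record["id"]] = record
    return codings

codings = load_codings("codings.jsonl")  # hypothetical export file
print(codings.get("ytc_Ugx_bV1jwLAjuNilkOl4AaABAg"))
```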
Random samples

- "Did you all know that Amazon made its job recruiter AI for a bit and then it sta…" (ytc_Ugy8uljiX…)
- "I'll be 66 in 3 months. Geoffrey Hinton says there is 10% to 20% chance AI will …" (ytc_UgxkM0IHd…)
- "Ai should have replaced those high paid CEO of most corporations as a start, a l…" (ytc_Ugwd_v09t…)
- "AI steals content as much as an artist steals from every content they have ever …" (ytc_Ugyxl1MR9…)
- "@rickwagner- Yes. I was talking about the retouching. Often an AI creator will …" (ytr_UgzJNaMSY…)
- "Please set up a nationwide recall election. No more Elon. No more AI. No more Tr…" (ytr_UgwET56Vr…)
- "So there's a back pack that the creator wears at all times that contains a kill …" (ytc_UgwXeRp4B…)
- "but it's still correct. the great potential of automation is in the potential of…" (ytr_UgivNXalc…)
Comment
I am in complete agreement with Dr. Roman, especially when it comes to super intelligence. I find the implications of super intelligence to be terrifying and a threat to the human race. Narrow intelligence, as we have now can be very beneficial to the world. Super intelligence will threaten humans inevitable extinction. Robots don’t need to be fed. They work 24 seven etc. etc.. I have been thinking about how we could program AI and stop super intelligence from destroying the human race Women have an instinctive protection mechanism for their children, just like a she bear will do anything to protect her Cubs if she feels them threaten in any way. In order to protect the human race from extinction, is it possible that we could somehow program AI to incorporate that instinctive protection so that they would not destroy us and look up upon us the same as she bears do with their Cubs and humans do with their children? What do you think of this idea as a security measure against super intelligence?
youtube · AI Governance · 2025-09-04T19:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
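Before analysis, a record like this one can be checked against the codebook. The sketch below validates a coding using only the category values observed on this page; the actual codebook may define additional labels, so the sets here are assumptions:

```python
# Allowed values inferred from the codings visible on this page;
# the real codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "confusion", "mixed"},
}

def validate(record):
    """Return (dimension, value) pairs whose value is outside the codebook."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

coding = {"id": "ytc_Ugwx79llVT16gbB0P6Z4AaABAg",
          "responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(coding))  # -> [] for an in-codebook record
```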
Raw LLM Response
```json
[
{"id":"ytc_Ugx_bV1jwLAjuNilkOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxeb0e3BsIISpa6Qr54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"confusion"},
{"id":"ytc_UgwTDdEgXsZ7_fOv1OV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgztFGr4QwQqe2QA7kR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfMH21s_XWjLyY2Sx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwWStSA1qosnBpGQvR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjkcyiXxGHu13gCAt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx79llVT16gbB0P6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxsGLDsfs5jZMktyDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJXvUbV2lGnUpDP-B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
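The model returns one JSON array per batched call, as shown above. A minimal parsing sketch, assuming the raw text may occasionally carry stray prose or code fences around the array (`parse_batch` is a hypothetical helper, and that failure mode is an assumption, not something documented here):

```python
import json
import re
from collections import Counter

def parse_batch(raw_text):
    """Extract the first JSON array from a raw model response,
    tolerating stray prose or fences around it."""
    match = re.search(r"\[.*\]", raw_text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON array found in response")
    return json.loads(match.group(0))

raw = '''Sure, here are the codings:
[{"id": "ytc_Ugx_bV1jwLAjuNilkOl4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}]'''
records = parse_batch(raw)
print(Counter(r["emotion"] for r in records))  # Counter({'fear': 1})
```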