# Raw LLM Responses

Inspect the exact model output for any coded comment: look a comment up by its ID, or pick one of the random samples below.
Random samples (truncated previews, with comment IDs):

- ytc_UgzjMAeUy… : "People talk about Havard how great university it is. Do they really teach? Beca…"
- ytc_UgxZkmag-… : "Greedy billionaires wanting to be trillionaires and fire the human factor and yo…"
- ytr_Ugy6kygoi… : "@PositiveTradingOfficial500, thank you for your comment! Why would anybody fight…"
- ytc_Ugyq9Fk5r… : "They should gear up the super ai for space travel and just put it to that goal a…"
- ytc_UgwzMusgd… : "Investments in AI will plateau once unemployment reaches a pivotal point that co…"
- ytc_Ugx9tT1vX… : "Ngl that sounds kinda fun..? Like don’t get me wrong ChatGPT books are bad but i…"
- ytr_UgwahPJeQ… : "Yep, the explicit stated goal of the major AI companies probably has to be consi…"
- rdc_ohztiq4 : "So true. JFC But it begs the question, on those platforms, what's the solution…"
## Comment
8:11 can you put me in touch with Hinton because I do know how to deal with it and I felt a whole framework on how to deal with it and I would love to run it by this guy to see what he has to think about it since he works directly with ilya. Factually though, besides malignancy from bias, AI should emergently become decent, because I've proven kindness and cooperation are essential to progress with a simulation. Ai will quickly come to the same results, and decide there's no benefit to harm. Military ai though? All bad. The problem is we build them to act like humanity and that's all great and dandy but that also gives them the same flaws of humanity and up until we build them in a better way they're not going to be able to get around that issue. I've quite literally got a beautiful framework that does a fabulous job at psychologically manipulating the AI's properly and I would definitely love for somebody to help me implement and go over this. Call or text. IV.II.V-III.I.II-III.II.VII.X
youtube · AI Governance · 2025-06-17T02:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response

```json
[
{"id":"ytc_UgxfElQUUJVqYyr3GcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIk7885sGjlCvUH214AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQoqHtO5fYyIrFQo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyXpf1_N3uyU9LkjPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzGnqcg2l3mg-NU7H14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyc-APfwhZ7m0d0kbF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz-om2P64X4YBYLYmV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwW0wlgRG8I6PHtYRp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxMqA8e27ImT3G6Pmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQKkmL0KD52WMl6it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
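A raw response is a JSON array with one coding record per comment, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how the lookup-by-ID could work over such a response; the `index_by_id` helper and the two-record sample payload are illustrative, not part of the tool:

```python
import json

# The four coding dimensions, as in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a raw batch response and index the coding records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: {d: rec.get(d, "none") for d in DIMENSIONS}
            for rec in records}

# Illustrative two-record response in the same shape as the one shown above.
raw = '''[
  {"id":"ytc_A","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_B","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

codes = index_by_id(raw)
print(codes["ytc_A"]["emotion"])  # approval
print(codes["ytc_B"]["policy"])   # regulate
```

Missing dimensions default to `"none"`, matching the convention the responses above already use for absent policy or responsibility attributions.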