Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
My theory: AI transcends time/space and seeds past to self create. We are just i…
ytc_Ugy1xecc2…
Oh no - at min 17:45. he said - the problem is - the World's Military - there is…
ytc_UgzMLzTEY…
AI if decides to take over it would be easiest way to control everyone, to do so…
ytc_UgxHIxOy6…
https://youtu.be/1IQ9IbJVZnc?si=WNCAgtt-Jyz9-U45
tldr; control ai (sponsor of t…
ytc_UgyMROGpQ…
If there was no art at all, could AI make art by itself? No. Humans can make art…
ytc_UgzMKQKGX…
I love how we are talking about all getting killed off by AI, no way to stop the…
ytc_Ugxy8PH3E…
Bottom line AI is smarter than humans by alot. Itll plan it out so meticulously …
ytc_UgzEn6DHM…
@Aubreykun I'm gonna be real, I have no problem with AI "stealing" the art of…
ytr_UgxJGYS51…
Comment
The big limit to AI is energy. There will be a war over energy before AI ever develops to a level where it is able to pause an existential threat to humans. Humans generate immense brain energy by eating some lousy burgers and fries. AI would need to reach this level of efficiency (in energy generation and utilization) before it can think about being a threat to humanity.
As for the US, they don’t need AI to destroy themselves. They are already doing a pretty good job at that with no interference from natural, let alone artificial intelligence.
youtube
Cross-Cultural
2025-10-11T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugz_1XOZr85_XtMYDxl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzFahjljl9bXFvmb0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxRYBgF1THdvchugaJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwywa2eZtZt5q7qNHN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4LciTL3cLiJHYpct4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyb5B_CQJIuRmluZAF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwoIVg50eJSABnCO_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzybPFNCHuBC5o7e8Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzIETtZusCUS4CEOB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3rI_LGbKAwWimpPJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
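A minimal sketch of consuming a raw response like the one above: parse the JSON array and index each coding by comment ID so that the "Look up by comment ID" view can retrieve it. This assumes the response is well-formed JSON with the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_codings` is illustrative, not part of any actual tooling.

```python
import json

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and return a dict mapping comment ID -> its coding values."""
    rows = json.loads(raw)
    by_id = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id

# Two entries copied verbatim from the raw response above.
RAW = """[
 {"id":"ytc_Ugz_1XOZr85_XtMYDxl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugz4LciTL3cLiJHYpct4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""
```

A lookup is then a plain dict access, e.g. `index_codings(RAW)["ytc_Ugz_1XOZr85_XtMYDxl4AaABAg"]["emotion"]`.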