Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or pick one of the random samples below to inspect it.
- Found This Channel because I saw AI „Artists“ seething over this Video, keep it … (ytc_Ugw9jAGom…)
- Her : I'll destroy humans / Chatgpt : I need just one prompt to dissemble all of y… (ytc_UgzpCUOPR…)
- Ok , now you're the best the greates the biggest, and you know the dangers..if s… (ytc_UgxefnNIe…)
- As a beginner artist who not only has been insulted by AI slop generators, but a… (ytc_UgzUHZFDA…)
- Certainly not arguing with your experience. You *did* get your app running, and… (ytc_UgwveSlcy…)
- About this whole AI situation, I am not for or against it. I remain neutral. I j… (ytc_UgzaIwzH-…)
- The main problem is not AI, although it's definitely necessary to regulate AI. T… (ytc_UgzUU2oZR…)
- They can't even monetize AI so why are they building out all of these stranded a… (ytc_UgzVPPMVC…)
Comment
Perhaps our thinking on this is way too anthropomorphic: an Ai that is capable of eliminating our species, is also capable of self sustainability without any concern for the plight of humanity. It would be similar to third contact with a interstellar-capable intelligence (biological, digital, or otherwise). Mere humans will be so out-classed by such an Ai that we probably wouldn't know that the Ai was in control. Wouldn't this be the smart thing for the Ai to do? At most, our efforts would be as effective as us getting a minor case of a flu. Assuming we even realize that there is an Ai in control, any resistance we could provide would be trivial and serve only to create "anti-bodies" to be re-used against similar future efforts. Well, now that I've told it what to do, I guess we'll never know.
youtube · AI Governance · 2025-10-24T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
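For downstream analysis it can help to hold each coded record in a small typed structure. A minimal sketch in Python, covering the four coding dimensions and the timestamp shown in the table above (the `CodingResult` class and its field names are illustrative assumptions, not the tool's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment; fields mirror the dimensions in the table above."""
    comment_id: str      # YouTube comment ID, e.g. "ytc_Ugwf5wLWAQ5s-arN28B4AaABAg"
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "resignation"
    coded_at: datetime   # when the coding run produced this record
```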
Raw LLM Response
```json
[
{"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoKATJs_-p_pyisyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxkMTZpL3o1OVxgbYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyN8lUbmNWdk2dffs14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwBhtnqoukhPTl8FSd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzzTOouq1je9BWmqSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_0e9quQvUALEUqVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgygEzIeTg02bEQwoYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwRD3gum62tfJxg5lh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
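The raw response covers a whole batch of comments, so finding the record for a specific comment is a lookup by `id`. A minimal sketch in Python, assuming the raw model output is available as a string (the `find_coding` helper and `raw_response` variable are illustrative, not part of the tool; the excerpt reuses two entries from the array above):

```python
import json

# Assumed: the raw model output is stored as a string; a two-entry excerpt
# of the batch shown above stands in here.
raw_response = '''[
{"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]'''

def find_coding(raw: str, comment_id: str) -> dict | None:
    """Return the coding entry for one comment ID, or None if absent or unparseable."""
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError:
        return None  # the model did not return valid JSON
    return next((e for e in entries if e.get("id") == comment_id), None)

print(find_coding(raw_response, "ytc_Ugwf5wLWAQ5s-arN28B4AaABAg"))
# {'id': 'ytc_Ugwf5wLWAQ5s-arN28B4AaABAg', 'responsibility': 'ai_itself', ..., 'emotion': 'resignation'}
```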