Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "AI does not generate anything new. AI only knows what humans have done. This sce…" (ytc_UgzjsYksV…)
- "Sound like the Antichrist could very well use A.I. to deceive and control the ma…" (ytc_UgyFmsdr0…)
- "Are society is being becoming like Star Trek money is no longer the driving forc…" (ytc_UgzoZSF6I…)
- "The world will not exist if it is not moth balled by the world or there will be …" (ytc_Ugzy9EV3n…)
- "I wonder how ai bros would feel if people started stealing their code in the sam…" (ytc_Ugy1vAnWl…)
- "Why not simply use the fact that the AI was very obviously the product of design…" (ytc_UgxjV_J8H…)
- "Same stupid arguments as for guns. The only thing that stops a bad guy with a gu…" (ytc_UgySsjHzA…)
- "Interesting. I have very little understanding of AI. My only concern is that peo…" (ytc_Ugxc1DGg4…)
Comment

> 1:11:55 Hello. HELLO!!!! AI IS NOW and since the beginning of the Gaza war being used to target civilians on purpose and maybe by mistake. If it is AI mistakes then the entire Gaza Strip was a AI mistake. Look the tech-bros of Palantir AI are culpable for the 170 school girls annihilated by Tomahawk missiles, TWO missiles in a "double tap". Again, that strike was orchestrated in part by Palantir AI and it's "Kill Chain". To quote Google Gemini AI... "The AI Component: Reports indicate the US military used the Maven Smart System, which was built by Palantir Technologies, to identify, rank, and target locations in Iran.". And right now the investigation is trying to determine if the so-called "mistake" was AI or Human.

Source: youtube | Topic: AI Governance | Posted: 2026-03-26T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwvdKWnPjUV9f-tZVt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwLc2II-NW1KhMrKkp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyHjwe987gHpF5nQw54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyfyEdq_Mgc7Q7gLYt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyXnOWFAraynJwSlV54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy83ApDK18PwGIOdmx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwlp0QaZilJG2rE7PF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxwD0HmezmjFqXNF2F4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzoia8QRE4uMWwesCF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy8AP0DU5HWPwxhdgJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
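The lookup-by-comment-ID flow described above can be sketched in a few lines: the raw model output is a JSON array of per-comment codings, so indexing it by the `id` field gives direct lookup. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown; the function and variable names below are illustrative assumptions, not the tool's actual code.

```python
import json

# A two-row excerpt of the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgyXnOWFAraynJwSlV54AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxwD0HmezmjFqXNF2F4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and key each coding dict by its comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

# Look up the coding for a single comment by its ID.
lookup = index_by_comment_id(raw_response)
coding = lookup["ytc_UgxwD0HmezmjFqXNF2F4AaABAg"]
print(coding["policy"])  # prints: liability
```

A real pipeline would likely also validate that every coded `id` matches a comment actually sent to the model, since LLMs occasionally drop or mangle IDs in batched responses.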