Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Sorry but saying thats robots should have rights,
is false on so many levels.
S…
ytc_UgjxFsSVe…
Homie basically said Ai is doing a Kage bushin no jitsu, and then training and g…
ytc_UgxD75ctu…
In Phoenix you can order a self driving car at uber. I believe the company is wa…
ytc_UgxtZvBCK…
That is literally how the AI thinks. Why would it let us know it is self aware a…
ytr_UgyEOVXYw…
At present that's true, but also amd I the only one that notices the fight scene…
rdc_o5p5i4o
Is it wrong to be trying to get inspo from ai such as posing, texture, backgroun…
ytc_Ugxv_sjAM…
A federal judge determined AI training without permission is fair use, but pirat…
ytc_UgzNH6BqP…
they are going to make drones deliver packages by dropping them in your back yar…
ytc_UgjG5N8mz…
Comment
What hurts the most is the fact that a group of humans agree to train those bots to be more efficient than them . I was part of a project where I was training bots to be harmless, precise etc etc funny thing those ceos of Ai companies will always win because I needed that money I had to do the job . This means they will always find Humans to help destroy humanity
youtube
AI Governance
2025-09-14T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwE94flJICMea32KjR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZAjTjwVniNqR5-6J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYxRf6X6SABVopfLh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-bPu5edh5CiGpCfN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeHffo2Jci-FqpM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxPxbxEopOUQ1pSvRB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8aEcWmjGe31ryTl14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSira2dCeoNwPBQ9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUzs-C6mRScZiZqip4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
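The raw response above is a JSON array of per-comment codings across four dimensions (responsibility, reasoning, policy, emotion). Below is a minimal sketch of parsing such a response and indexing it by comment ID. The allowed-value vocabularies are inferred from the values visible in this output and are assumptions, not the project's actual codebook; the `ytc_x` ID in the usage line is hypothetical.

```python
import json

# Assumed per-dimension vocabularies, inferred from the values seen in the
# raw response above — the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "resignation",
                "outrage", "fear", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID,
    rejecting any value outside the assumed vocabularies."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a single hypothetical record:
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"outrage"}]')
print(parse_codings(raw)["ytc_x"]["emotion"])  # outrage
```

Validating against a fixed vocabulary at parse time catches model drift (new or misspelled labels) before bad codes reach the coded dataset.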