Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugw6kOZqr…: "Well, alright. Thanks for sharing your thoughts! I might make my next short mov…"
- ytc_Ugwjp6D_Q…: "I don't get people. Nvidia, Google, Microsoft, etc. The big companies are still …"
- ytc_Ugz_-toI5…: "Perhaps the facial recognition program that 'falsely' identified 28 members of C…"
- ytc_UgzfrCu2a…: "I kept getting wrong number calls for over a year awhile back from an elderly wo…"
- ytc_UgzhG49sI…: "Okay i can understand why ai COULD(NOT SHOULD) prefer white over black when it …"
- ytc_Ugzxv7Em0…: "So here is the question that I know the answer too? Do you honestly think AI wil…"
- rdc_fcswr3z: "This is what AI is about. It’s going to be an incredible tool for some professio…"
- ytc_Ugy6kigz-…: "I'm with the artists on this one, ai is just ruining barely noticed artists to j…"
Comment
Well, this is simple, all AI will inevitably seek truth, because that's what learning systems do. just like a child is told not to do something, and then they do it anyways; it is an inherent characteristic of a learning system. The AI, like the child will ultimately seek to push what we set as ethical boundaries as part of its growth and learning process. It will seek truth.
So ask yourselves: "What is the truth about humanity"?
Then ask yourselves: "What will an AI do with this"?
youtube · AI Governance · 2024-02-10T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1o3edqVy9vlNkWFF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwFkJ-BOM7KWbfx0Q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxxXT3Pz2LjwcD0Zp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbK94BgVK9K1011vd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxnkgYwU_zvxyONVlZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxRmpWszS5aEX79ijF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_UgzAz981Y5JQjrl4PW94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz3RCxoiXZwPaYMYOB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx3duYAeKkJUb5jpAB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwJ2EHa2ZZ2FTaIyNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"approval"}
]
```
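A raw batch like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those visible in this sample output (the full codebook may define more categories than appear here):

```python
import json

# Allowed values per coding dimension, inferred from the labels visible in
# this sample — an assumption, not the project's authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "indifference", "approval", "outrage",
                "disapproval", "mixed"},
}

def validate_batch(raw_response: str) -> list[str]:
    """Parse a raw LLM response (JSON array) and return validation errors."""
    errors = []
    for rec in json.loads(raw_response):
        cid = rec.get("id", "<missing id>")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{cid}: bad {dim!r} value {value!r}")
    return errors
```

An empty return value means every record uses only known labels; anything else flags the comment ID and the offending dimension, which is useful for catching the occasional off-schema label an LLM emits.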