Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- "Disney must have some kind of deal with AI companies. Otherwise how could you ex…" (`ytc_UgxAlMocp…`)
- "Would you rather an AI do your homework or would you want some random joe to do …" (`ytr_UgyTSbXeS…`)
- "Come on Diary of a CEO....you gotta think more about 'agendas....just because th…" (`ytc_Ugx1LUWJj…`)
- "That's only the outcome.. if we don't demand and fight for a better one, while w…" (`ytr_Ugyv-Bb3f…`)
- "If we had the technology to have self-driving cars, don't you think that they wo…" (`ytc_Ugj_fCP6P…`)
- "He has won a prizes !👏 cheatin and deception? Are you surprised? While you were …" (`ytc_UgzWnxrKu…`)
- "The plan is to kill us off and bring in the new beast anti christ system…" (`ytc_UgyESjQF5…`)
- "bro, it’s not that hard to just not make a sentient ai with feelings and emotion…" (`ytc_UgzT_9s_Z…`)
Comment
The problem is not that we are creating something smarter than human kind...
The problem is that we allow it to learn the ways of human kind with it's inequality, war abuse, power monger, wealth monger, religious manipulation, burning, raping, murder destroying our planet for a few dollar or likes...
And then you tell the A.I. : study us, be like us, what do you think that will happen in 20/30 years if this goes on....?
youtube · AI Governance · 2023-07-15T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtvrMZkti8Yi9c4Kp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx0h2-fVNj92U5Y1EJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzfKlUQsMDNukPr-Bh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzErtzEtpmo_TSNPoF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwgSvUCeZMsabHoF3R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyAcbwYWY3fdvza4Lp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxbSPzOH1s_261D5qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyq-EUFXNGEURPN_bx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvDWqF1K1WpaEboXZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugzc1NNvhcDjDZi_oxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
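The raw response is a JSON array with one object per comment, carrying the comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed and shape-checked before loading it into the tool (the function name and validation rules here are illustrative, not part of the actual pipeline):

```python
import json

# The five fields every coded record carries, per the response format above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's shape.

    Raises ValueError unless the payload is a JSON array of objects
    that each contain all of the expected keys.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            raise ValueError(f"record {i} is not a JSON object")
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing keys: {sorted(missing)}")
    return records

raw = """[
  {"id": "ytc_UgwtvrMZkti8Yi9c4Kp4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]"""
records = parse_coding_response(raw)
print(records[0]["emotion"])  # outrage
```

Checking only the key set, rather than a closed vocabulary of values, keeps the parser tolerant of labels like `mixed` or `unclear` that the model falls back on for ambiguous comments.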