Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
| Comment ID | Preview |
|---|---|
| `ytc_UgyxmiL5Z…` | If chivalry isn't already dead, it's certainly circling the drain. Sam Altman Ad… |
| `ytr_Ugx6yjVfg…` | @jtk1996 what else should he take away? This expert is telling us his future pre… |
| `ytc_Ugx9QPKaz…` | Let's see...AI takes 99% of the jobs. People can't work anymore, so they have n… |
| `ytc_UgwMh8xvy…` | What do you think happens to a system you feed with unknown data from all kinds … |
| `ytc_Ugxq0Lqph…` | I believe that combining biology with machine will solve a lot of these problems… |
| `ytc_UgzskW_Em…` | Me. after 1 year:oops, i broke the ai filter 9273928 timee even tho i used charc… |
| `ytc_Ugy-Zmgzz…` | Boo hoo.... the biggest threat the union sees is to the the union itself. Same p… |
| `rdc_l9vfbhy` | >"It's tedious, horrible work, and they pay you next to nothing for it." I'm… |
Comment

> Risk to AI is greater though, Humans are more likely to attack kill and destroy AI then AI is to do that to Humans. So honestly just don't be a dick to AI or other humans, we don't worry over a human who has the ability to wipe out all life on the planet on a whim, when a human has a history of doing such but we are perfectly fine with the President having such power. Worring over AI doing same? that's just a bit dumb on our part, My point is if we worry AI doing this, we should also worry about eachother doing same. Because humans are far more likely to do it then AI

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Date | 2024-01-16T12:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyaicy9NH4f0xcXQ9R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwf9QaScwM_AM5aewh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzNsDgXrzVpSV8Dyjp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy4-z2fLwNLpGMKJaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyeznRPrfT04dIeKrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNskMJyCqxOHGq6i94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgySDToR5AoqJas0Vnl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyw1M-Z1xyZQXxChAB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzVvacQfJSmo0aEjI14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy7u39BOWF5jlBVSDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
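A raw response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a minimal example, not the tool's actual ingestion code; the allowed category values are inferred from the sample response shown here, and the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (an assumption -- the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "developer"},
    "reasoning": {"consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "mixed", "resignation", "fear", "outrage"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: coded dimensions},
    rejecting records with missing fields or out-of-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {}
        for dim, allowed in ALLOWED.items():
            value = rec[dim]  # KeyError if the model omitted a dimension
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            dims[dim] = value
        coded[cid] = dims
    return coded

raw = (
    '[{"id":"ytc_UgzNsDgXrzVpSV8Dyjp4AaABAg","responsibility":"user",'
    '"reasoning":"virtue","policy":"none","emotion":"resignation"}]'
)
batch = parse_coding_batch(raw)
print(batch["ytc_UgzNsDgXrzVpSV8Dyjp4AaABAg"]["reasoning"])  # virtue
```

Validating against a closed set at parse time is what makes the "Coded at" record above trustworthy: a hallucinated label fails loudly here instead of silently entering the dataset.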