Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If they don't have them already infrared sensors are a must. The city has stray …
ytc_Ugzqq1ivi…
Business Insider is paid by the auto industry through advertising and in exchang…
ytc_UgzWhnwJo…
I really cant stand this guy, he constantly makes unscientific remarks like this…
ytc_UgyCufl9x…
I keep seeing comments saying use "I'm using Claude Code Opus 4.5, it works so w…
ytc_Ugxy_J6g7…
I too, lost my job to AI!
I was a freelance artist with a few years with a few…
ytc_Ugzy-NGCz…
Hoyo CEO has another company he owns thats AI support. Hoyo is the trap to lure …
ytc_Ugz_h_nwF…
also, I'm someone who supports ai art since it's just a tool. It's like if I use…
ytc_UgwV-2kd8…
I work on the computer side of this debate (I don't support training AI without …
ytc_Ugw3Ty8sG…
Comment
AI . I don't think that AI just has the potential to be dangerous or destructive, I think AI will definitely be dangerous and destructive particularly because it has been created by mankind who has danger and destruction in their makeup or spirit. You can try to edit that out but you can't. Just as our thoughts can't be regulated. If AI has it's own thoughts, it will have it's own bias. It might start out with it's creator's bias but will reject that for it's own bias which it may not consider a bias, or worse, may not care
youtube
AI Governance
2023-04-18T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyRktXepxWcbhUYrQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQTz9fLXAPUpjYu7Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxfkOn5G8MfPau3gZ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyAsiGHE6asYD_YEoV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4MellDNC2qolkWSl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw9gWVsVaNO4f9BhjZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzEqTtZ6zZIivkef694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyCrbs1vJuUZj83LIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwA5mdu2NzuU9eliUN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugw6w63LK8WneeuPA6Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
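The Coding Result table above corresponds to one entry in this JSON array, matched by comment ID. A minimal Python sketch of that lookup, using one row copied from the response above (the variable names are illustrative, and the dimension vocabulary shown is only what appears in this sample, not necessarily the full codebook):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, one object per comment ID.
# This single row is copied verbatim from the response shown above.
raw = """[
 {"id": "ytc_Ugz4MellDNC2qolkWSl4AaABAg",
  "responsibility": "distributed",
  "reasoning": "consequentialist",
  "policy": "liability",
  "emotion": "fear"}
]"""

# Index the codings by comment ID, mirroring the tool's "look up by comment ID" view.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugz4MellDNC2qolkWSl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

The printed values match the Coding Result table for this comment; the same dictionary lookup works for every ID returned in the batch.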