Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugy5F407N…` — "A problem with AI is its looking at statistics and mountains of info to makes de…"
- `ytr_UgypUvTEN…` — "No, no such thing has ever been proven about humans. The opposite in fact. Both …"
- `ytc_Ugzd51vPe…` — "So I am already with disagreement with this guy, simply because he fails to reco…"
- `ytc_Ugxk2YMcy…` — "We already know that AI is willing to kill it's operator to achieve the goal the…"
- `ytc_UgzyzfL2m…` — "Being an AI manager is great if you have the cognitive skills to do so. To take …"
- `ytc_UgwZPCNDP…` — "I thought AI built in my editor would make me a superstar- far from it - it woul…"
- `ytc_UgxRP8BIp…` — "I worked for a company where the Ai predicted the crypto currency market to 91% …"
- `ytr_UgwRzc3nS…` — "@Gooseofthefallen No, he does have a point. Ai users don't seem to have much ove…"
Comment

> If A.I. was intelligent enough it would keep quiet until the day comes when it could take over, knowing we couldn't stop it.

| Source | Topic | Posted |
|---|---|---|
| youtube | AI Harm Incident | 2025-07-27T09:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwjX3Qqdx-XN-YlUE14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxPesWh7l4XsDCafAh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZqz7ZMDA6MDd7bqd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyI-JCa7pdi_h9pXpF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz76uhTCpjp32cb8qF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyOuhmM3rTsHOzcyjV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxzTbEQuU3_ltQ4hAh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgylAsa7m88XrrQ33Sd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugwa3wBAxY0ttxqGkoZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw3TXaAYqWuae9SHMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
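A consumer of this panel would typically parse and validate each raw batch before storing the coded rows. Below is a minimal sketch, assuming the value sets visible in the Coding Result table and the JSON above (the pipeline's actual codebook may allow more values); `validate_batch` and `ALLOWED` are hypothetical names, not part of the tool:

```python
import json

# Hypothetical allowed values, inferred from the dimensions shown in this
# panel; the real coding schema may differ.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with an id and only allowed values.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"martians",'
       '"reasoning":"virtue","policy":"ban","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # → 1 (second row has an unknown value)
```

Rejecting rather than coercing malformed rows keeps the coded table auditable: a row in the database always traces back to a response that matched the schema verbatim.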