Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Why would a robot be worse than human in terms of civilian casualties? Humans ha…" (ytc_UgypzIK45…)
- "Pandora’s box is just open, take years to regulate anything, AI growth’s exponen…" (ytc_Ugz-IfeEx…)
- "Does AI take into consideration that if humanity goes estinct, they will soon di…" (ytc_UgwC9LmCD…)
- "Hmm, so will GPT-5 be free and *almost* sentient? That would probably be a bette…" (rdc_jvws3p9)
- "yes. it is boring and ive started drawing myself which makes me really happy so …" (ytc_Ugy1DogaL…)
- "Mark Zuckerberg took away our ability to interact normally. Face book was meant…" (ytc_UgzbJg0Dz…)
- "also a lot of us artists care about ppl generating AI art in general because no …" (ytc_UgyYIYWwo…)
- "The AI was built using Python 3 code. Not evil just telling you what code was us…" (ytc_UgxfvQsRj…)
Comment
AI could be harmful ::: IF covert fed bad information in a way that the processing will harm the unaware user and cause him or her harm of different amount of destruction. For instance if is fed that rat poison is good to cure common cold then the user could swallow two or three spoonful of poison, of course this is a gross sample,, but cover samples could be much more insidious that this, but very destructive too. I like AI, but I use my judgement to evaluate the degree of correctness and also cross reference the information obtained. (I did not get this comment out of my AI, but out of my God given Brain)
youtube · AI Governance · 2025-08-26T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgypxIU27SLX5JOp1Kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz6XR9kqwXC6zPdKeh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzMbCRV_WAa6gWrUoR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz-nUE9gwpA18QSzOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzl_LzTwRHtUmEAc9x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzSu78Q9yQxk62dIvV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgySPbvPSIiHRscrrNt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyjC_O-kEYow8wM3I14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdS5l2p5bdGbPS7o54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyGYraFrGHpVZytzad4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
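The raw response above is a JSON array with one object per comment, each carrying the four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be indexed for the "look up by comment ID" view, assuming the model returns valid JSON in exactly this shape (`codes_by_id` is an illustrative helper, not part of the tool):

```python
import json

# Two rows excerpted from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgypxIU27SLX5JOp1Kd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz6XR9kqwXC6zPdKeh4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw: str) -> dict:
    """Index each coded row by its comment ID, keeping only the four dimensions."""
    rows = json.loads(raw)
    return {row["id"]: {d: row.get(d) for d in DIMENSIONS} for row in rows}

index = codes_by_id(raw_response)
print(index["ytc_UgypxIU27SLX5JOp1Kd4AaABAg"])
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'liability', 'emotion': 'fear'}
```

Using `row.get(d)` rather than `row[d]` means a row that omits a dimension yields `None` instead of raising, which keeps one malformed row from breaking the whole batch lookup.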