Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:
- I don't think it has to be a here or there thing! AI is frankly here to stay but… (`ytr_Ugy-0tD2f…`)
- @MichaelMasonUWA I looked at your digital citizenship playlist, and your videos … (`ytc_UgxL7BVrm…`)
- Quoting my ChatGTP: No, I’m not sentient. I don’t have thoughts, feelings, co… (`ytc_UgxFERaOd…`)
- Personally I think its fine as long as it has a clear watermark that says it is … (`ytc_UgyD1DfXK…`)
- I work at a medium- small manufacturing plant. While they are trying to upgrade … (`ytc_UgyUUnPrP…`)
- As Elon Musk once said "suppose we're building a road and there's an anthill in … (`ytc_Ugy-IL9ND…`)
- I see this guy has a lot of knowledge about AI but almost none about humans.… (`ytc_UgxBHVyjj…`)
- ai is for few users,basic income is for all / ai in industry = no people in indu… (`ytc_Ugw0FV92t…`)
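The same lookup can be reproduced offline against an export of the coded results. The sketch below is a minimal example, assuming the codings are stored one JSON object per line in a hypothetical `coded_comments.jsonl` file; the file name and storage layout are assumptions, not part of the tool.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # one coded comment per line
            if record.get("id") == comment_id:
                return record
    return None

# Full IDs are truncated in the sample list above; a complete one is needed here.
print(lookup_comment("ytc_UgyKmWQKGvmIpSbx7vt4AaABAg"))
```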
Comment
8:27 DO NOT ENCOURAGE USING AI FOR THERAPY, it has been shown time and time again that it can encourage delusions and even encourage self deletion and harm, also exacerbate loneliness because ai chat bots are made to try and not argue with you, they are too agreeable and normal people wont agree all the time with you. Ai mirror you to trick you, so if you say you need to delete yourself, they are likely to encourage it, they dont have the emotional intelligence to talk someone off a ledge, nor to differentiate facts from fiction when you are having psychosis or delusions.
youtube · AI Harm Incident · 2025-07-23T23:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
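Each dimension draws from a small closed label set. The sketch below reconstructs that label space as Python `Literal` types from the values visible on this page; the sets are inferred and may not be exhaustive. Note that the `Coded at` timestamp does not appear in the raw response below, so it is presumably stamped by the pipeline rather than returned by the model.

```python
from typing import Literal, TypedDict

# Label sets inferred from the codings shown on this page (assumed, not exhaustive).
Responsibility = Literal["company", "user", "ai_itself", "distributed", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["ban", "regulate", "liability", "none", "unclear"]
Emotion = Literal["fear", "outrage", "sadness", "indifference", "mixed", "unclear"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

Typing the rows this way lets a checker such as mypy flag off-schema model output before it reaches analysis.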
Raw LLM Response
[{"id":"ytc_UgxoGX6L1cJXNPKwtOd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyKmWQKGvmIpSbx7vt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy0IEkjH7s8Dx-FiQl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx5jiRpr3qkTmhwU7B4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKghNhEuIiL782Jjd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz7eHG3D_acLwhLTCN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLXJqn2Jt_uzXsIGF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwYJVedaRHHCEGjbB14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"},
{"id":"ytc_UgxW30jycqMTjOS8AFp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzOePfIjjnliicJo0Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]