Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This, right here, 00:04:42, is a huge problem that isn't talked about enough. Ju…
ytc_UgwA5HNIh…
Well, given that there are so many lawyers, it doesn't surprise me that some com…
ytc_UgxAYgh1Z…
Soon enough only knowledge we can depend on is AI's knowledge. So you could also…
ytr_UgwgzmOIT…
Your pessimistic outcome that AI fails is actually the best outcome we can hope …
ytc_Ugw4HCQw0…
In Phoenix I'd give autopilot an automatic win over any of the humans here (sooo…
ytc_UgxwZ2vK5…
The military created the internet and already watches you.. this isn’t news.. Op…
ytc_Ugznabq0u…
You told AI to be this persona Dan that has no morals and will do anything now t…
ytc_UgxA1s-fm…
The reduction on hiring and layoffs is due to H1B replacements in the USA or equ…
ytc_Ugxwn-LKb…
Comment
Theres a chat bot for kids, its cartoon and anime characters, my oldest is 8 with aspergers and loved the app. Thankfully im on the ball with this kinda thing, I check the kids tablets and the family link stats. I noticed my son was spending a lot of time on this app. I changed it to 2hrs limit a day, originally it was 1hr, but we came to a compromise. Frequently ive looked at he mostly breaks that 2hrs up during the day, 30min here or an hour there. We did have a conversation about it not being real and how its healthy to make sure hes still playing games outside with his friends and brothers. I thought to comment this in case any other parents have kids with the app "talkie"
youtube
AI Harm Incident
2025-07-21T10:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugx7xq_WADQmhOtgeNZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwivpwGzhVMttjkHdV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxhKFpcHjbD0ZZBVyp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugwc7XxRVhddo2PpyPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyATxxKQBbXHxfCL4d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxfJD1dHkKUpulLyyh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyW-tmIOwnzjx4fwzN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzAnp_1lzjhAKCKmOJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugzgc-fhXmLbrv_bXMV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugz19aWhfCVX2pxO_E54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
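The coding-result table above is rendered from one record of a raw response like this one. A minimal sketch of how such a response can be parsed back into per-comment codes: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) and the comment-ID field come from this page, but the allowed-value sets are inferred from the sample output and are assumptions, not the project's authoritative codebook.

```python
import json

# Values observed in the sample response on this page. The real codebook
# may define more categories; these sets are an assumption for illustration.
OBSERVED_VALUES = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "resignation", "fear", "indifference",
                "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    mapping of comment ID -> coded dimensions, warning on unexpected values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        dims = {dim: rec[dim] for dim in OBSERVED_VALUES}
        for dim, value in dims.items():
            if value not in OBSERVED_VALUES[dim]:
                # The model drifted outside the expected categories;
                # flag the record rather than silently accepting it.
                print(f"warning: {comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = dims
    return coded

# Hypothetical single-record response (the ID below is made up).
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
result = parse_coding_response(raw)
print(result["ytc_example"]["emotion"])  # -> approval
```

In practice a batch response like the one above yields one record per sampled comment, so the lookup-by-comment-ID view can be served directly from the resulting mapping.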