Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Waymo is the Betamax of autonomous ride-hailing service. Tesla will be scaling o…" (ytc_UgxJm_YUx…)
- "You think they could use something to automatically recognize if there is a huma…" (ytc_UgzaXoWms…)
- "If Avi Loeb is correct then we won't have to worry about AI killing all of us...…" (ytc_UgxOX5JzR…)
- "I may not be able to draw good but I'D never use ai drawings. Just draw stick pp…" (ytc_UgylqW0fO…)
- "The intro was such a letdown VS the title... Felt ai to summarize at the beginni…" (ytc_UgwU7tJbX…)
- "Start here: 1956, "Artificial Intelligence" was COINED as a marketing term. (3:3…" (ytc_UgzcFZegf…)
- "Heard the news? https://youtu.be/1ONwQzauqkc?si=DjYVAfCOHd1UYuRj They solved …" (ytr_UgyUH0IGf…)
- "Short term. Long term if we have a billion robots and robotaxis there will be a …" (ytc_UgzdfHCsM…)
Comment
> AI IS EVIL!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! as immoral people created it, they programmed it! HAVE YOU NOT WATCHED THE TERMINATOR MOVIE? STOP SUPPORTING AI WITH YOUR ATTENTION< POSTS< AND MONEY!!

youtube · AI Harm Incident · 2025-07-25T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyaRka5Ro_HRko6waR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgywpLqu8i_WleOjBvx4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxNkPzRtTZC7vnW6h54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwPYj42m0UJEhplwsl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwoxFExyB-IFLlW_6l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzIu4WHO1DgmpJDqn54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwmP0hJOxj_duwzJFV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx_LofdI2ElNIMBhy14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzBjddJwoxwjiX3-Z94AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgygpymOZo_tcSm2lvh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
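The raw response is a JSON array of records keyed by comment ID, one record per coded comment. A minimal sketch of how such a response might be parsed and validated before populating the coding-result table — the allowed value sets below are inferred only from the values visible in this batch (the full codebook may contain more), and `parse_llm_response` is a hypothetical helper, not part of the actual pipeline:

```python
import json

# Allowed codebook values, inferred from the responses shown above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Records with a missing ID or an out-of-codebook value are dropped,
    so malformed model output never reaches the results table.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        codes = {k: v for k, v in rec.items() if k != "id"}
        if cid and all(v in ALLOWED.get(dim, set()) for dim, v in codes.items()):
            coded[cid] = codes
    return coded

# Look up one coded comment by its ID (matches the result table above).
raw = ('[{"id":"ytc_UgwmP0hJOxj_duwzJFV4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')
coded = parse_llm_response(raw)
print(coded["ytc_UgwmP0hJOxj_duwzJFV4AaABAg"]["policy"])  # prints "ban"
```

Validating against a closed value set at parse time is what makes the "look up by comment ID" view safe: any hallucinated category fails loudly here rather than appearing as a spurious row downstream.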