Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "I've read that people are starting to realize that offloading cognitive function…" (ytc_UgyJh1oHx…)
- "Thank you for sharing your perspective. If you're interested in exploring AI fur…" (ytr_UgwcVtBDF…)
- "As an amateur digital artist, I am really concerned by the fact everyone is talk…" (ytc_Ugx3MPPdW…)
- "If we don't create immunity naturally through genuine infection, how is a vaccin…" (rdc_g9thjmx)
- "If musk admit that AI is very dangerous they why he is using AI everywhere ??…" (ytc_UgyS7tJzj…)
- "or maybe it's because they're trying to sell an AI product of some kind / whene…" (rdc_mowmhv8)
- "Interesting how we're going though the BLM and rights for women. Humans will alw…" (ytc_UgzGyNw_Q…)
- "With all due respect, you don't know what I was doing when I was learning JavaSc…" (ytr_UgwGewz1D…)
Comment
Do you not know what Ai is or at least how they currently work?? They are fancy computer programs that are designed to react to certain prompts or do something at certain times. They don't have a consciousness they don't have free will to say anything that they choose and they can't go against what they aren't allowed to say. The robots that they are making while impressive they can't exactly measure up to humans and if one approached you, you could just easily push it over. While I hate Ai because it can replace peoples jobs and does terrible things to the environment we aren't living in the terminator movies and Ai still has mountains of obstacles to get over before it reaches that level and people aren't that dumb as to create and allow the purchase of something that would kill people. At least to not any civilian.
youtube · AI Harm Incident · 2025-08-25T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugw49cVqOfT6ajGEGqh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytLuZ0fOEvJkmFosF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzsTRcrUjNp6M0a3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwpnXodS0Gj6ItQ0Ct4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxodhSWm1oxA4AgeIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7kWm0aEWBCDK2vTJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw0rNz6NeN2fBmT20p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0VesQ_qDJmEl-fm94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-0cJqCxdUKNDa4I94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlatGFFA252h7PGyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
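The raw response above is a JSON array of coded records, one per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output can be parsed and indexed for the look-up-by-ID view follows; the field names come from the JSON above, while the helper name `index_by_id` and the validation logic are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Raw LLM response in the format shown above (truncated to two
# records for brevity; IDs are real examples from the output).
raw = '''[
  {"id": "ytc_UgytLuZ0fOEvJkmFosF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx7kWm0aEWBCDK2vTJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}
]'''

# The four coding dimensions seen in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(payload: str) -> dict:
    """Parse the model output and index coded records by comment ID,
    rejecting records that are missing an ID or any dimension."""
    coded = {}
    for rec in json.loads(payload):
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytc_UgytLuZ0fOEvJkmFosF4AaABAg"]["emotion"])  # indifference
```

Keying the records by comment ID makes the "look up by comment ID" view a single dictionary access, and the per-record validation surfaces malformed model output at parse time rather than at display time.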