Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect it. (A minimal lookup sketch in code follows the sample list.)
- `ytc_UgwrcKU5d…`: Let's be honest.. The pics weren't AI. I mean, this is horrible, but come on.…
- `ytc_UgwGNwcCD…`: The more I see ai stuff (images, videos, etc.), the more I crave authentic human…
- `ytc_UgysgG8v5…`: I think this is already happening a little on Facebook lol. The box can get us a…
- `ytc_Ugz99OwhU…`: 46:43 The goal here, as I understand it, is describe worthwhile world given a wo…
- `ytc_UgwpR4otv…`: The movie terminator will happen it will happen to take control money and power …
- `ytc_Ugztd-Vbg…`: the only good thing ai art has done has made it so that people appreciate beginn…
- `ytr_UgzAxYbAL…`: @skaruts "my point was that people shouldn't call it "AI art"" Ofc, creating an …
- `ytc_Ugyhmza4J…`: im about to spice this comment section up lol theres the same automation coming …
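To reproduce the ID lookup outside the page, a minimal sketch is below. It assumes the raw model outputs are archived in a JSONL file (the name `raw_llm_responses.jsonl` is hypothetical, not part of the tool), where each line holds one batched response: a JSON array of per-comment records like the one shown under Raw LLM Response further down.

```python
import json
from typing import Optional

def find_raw_record(comment_id: str,
                    path: str = "raw_llm_responses.jsonl") -> Optional[dict]:
    """Return the coding record whose 'id' matches comment_id, or None.

    Assumes each line of the JSONL archive is one batched LLM response,
    i.e. a JSON array of objects with keys id/responsibility/reasoning/
    policy/emotion, as in the Raw LLM Response example below.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# Example: the comment displayed in this section.
print(find_raw_record("ytc_Ugyx60GwAmPsAACY1-R4AaABAg"))
```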
Comment
> There are a couple of catches with actually using this technology. AI does not 'think' on it's own so is not really intelligence. It does crunch data and give answers that seem human to us (mostly). When AI does make a mistake or do one it isn't just a little off, it is straight forwardly weird. Finally, If AI does actual doctor work, it will not be able to doubt itself, know there are limits, consult with others who know a procedure might have an uncertain outcome, not will if fear accountability because for it, doing harm can not have consequence.
>
> I like it like I like google. I think AI is a good tool to supplement human education and abilities, not replace or supersede them.
youtube · AI Harm Incident · 2024-05-31T16:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
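For reference, the record behind this table can be modeled as a small typed structure. This is a sketch inferred from the values visible on this page; the full codebook for each dimension may contain labels that do not appear here.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    # Field names follow the JSON keys in the raw response below;
    # the value lists in comments are only the labels visible on this page.
    id: str               # e.g. "ytc_Ugyx60GwAmPsAACY1-R4AaABAg"
    responsibility: str   # observed: none, company, developer, user, ai_itself
    reasoning: str        # observed: consequentialist, deontological, mixed, unclear
    policy: str           # observed: none, liability
    emotion: str          # observed: fear, indifference, mixed, outrage, approval, resignation
    coded_at: str = ""    # timestamp added by the pipeline, e.g. "2026-04-27T06:24:53"
```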
Raw LLM Response
[
{"id":"ytc_UgyeRIUzWibLNMrEnQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyx60GwAmPsAACY1-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxHlgO2QKiD6XKFUo94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwFuP_2V-b8gy5017d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwyTaQbGoahOm5Q5RV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyBlBOAfDnOr4_wim14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9b7IQyWfxsPsVeWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6Nf-jAVJkaIWrX2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyKpp4tSrs88CGZ8zR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx4rC1BorCZ5sAmW814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
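To go from one raw batched response like the array above to the single-comment Coding Result shown earlier, the response only needs to be parsed and indexed by comment ID. A minimal sketch follows, assuming records missing an expected field are simply dropped (that handling is an assumption, not documented behavior).

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_batch(raw: str) -> dict[str, dict]:
    """Parse one batched LLM response (a JSON array of records) and
    index the well-formed records by comment id."""
    by_id = {}
    for rec in json.loads(raw):
        if EXPECTED_KEYS.issubset(rec):
            by_id[rec["id"]] = rec
    return by_id

# Two records copied from the raw response above.
raw = ('[{"id":"ytc_UgyeRIUzWibLNMrEnQd4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
       '{"id":"ytc_Ugyx60GwAmPsAACY1-R4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')

print(index_batch(raw)["ytc_Ugyx60GwAmPsAACY1-R4AaABAg"]["emotion"])  # fear
```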