Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below to inspect it. A sketch of how such a lookup might be implemented follows the sample list.
- "People with dark skin are being arrested because facial recognition technology f…" (rdc_h55501m)
- "It is not about species humans vs. AI, it ist about greedy humans using AI to ma…" (ytc_UgycVKEEA…)
- "We are literally building AI robots that know everything about being a human. It…" (ytc_Ugzy-xS6T…)
- "EN MUCHOS GOBIERNOS DEL MUNDO...YA HAY DE ESTOS... REEMPLAZANDO A HUMANOS... DE …" (ytc_Ugxv4ynE8…)
- "Yeah I already accepted that AI is here to stay. But the fact that it looks like…" (ytc_UgyHX9nyo…)
- "I know it was a sarcastic post but I'm just saying it's not true because it woul…" (rdc_o8qh1em)
- "I just did this with chatgpt and when I asked if humans are being watched. It sa…" (ytc_UgzY9005c…)
- "Tell him to use AI just at the beginning, when he gets more budget of time he ca…" (ytr_Ugwl4l08E…)
Comment
I'm always curious whenever discussion (be it in science fiction or real world theoretical discussion) is about artificial intelligence "simulating emotion". Sure, we may create programming for emotions, but what if, like all things with AI, that gets changed and expanded over time? At what point is it no longer simulation? What is the definitive factor that you can point to with human beings that we are actually 'experiencing' emotions? What if we are merely simulating them as well? They feel pretty real to me, but who's to say the same won't be true of AI describing their emotions?
Source: youtube | Topic: AI Governance | 2025-07-28T10:0…
Coding Result
| Field | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
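The four coded dimensions plus the coding timestamp map naturally onto a small record type. The following is a minimal sketch; the class name, field names, and placeholder ID are assumptions for illustration, not the tool's actual schema.

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coding decision for a single comment, mirroring the table above."""
    comment_id: str
    responsibility: str  # e.g. "developer", "company", "none", "unclear"
    reasoning: str       # e.g. "deontological", "consequentialist", "mixed"
    policy: str          # e.g. "regulate", "ban", "none", "unclear"
    emotion: str         # e.g. "outrage", "fear", "approval", "mixed"
    coded_at: str        # ISO 8601 timestamp of when the coding was produced


# Values mirror the Coding Result table above; the comment ID is a placeholder.
example = CodingResult(
    comment_id="ytc_example",
    responsibility="unclear",
    reasoning="mixed",
    policy="unclear",
    emotion="mixed",
    coded_at="2026-04-27T06:24:59.937377",
)
```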
Raw LLM Response
[
{"id":"ytc_UgwuV01xuwf1RiHnIGF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2PoOfxjDECrG1S6N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy_TKKGC62b-6UPAWl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzdgGKZD--WckxkkUd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugzhsie7S2bDP-n3BJR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxSj9qa8_s1mTRZR454AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZpa_J1HCNondQ1jl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyDpKeEoBcu2Zr6bIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyD2jXqoI4QRb7SrW94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyAzzfPs-dCyJywIER4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
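Each raw response is a JSON array with one object per comment in the batch, keyed by comment ID. A minimal sketch of how such a response could be parsed back into per-comment codings; `raw_response` stands for the string shown above, and the function and variable names are illustrative.

```python
import json


def parse_batch_response(raw_response: str) -> dict:
    """Parse one raw LLM response (a JSON array) into a dict keyed by comment ID."""
    entries = json.loads(raw_response)
    codings = {}
    for entry in entries:
        codings[entry["id"]] = {
            # Fall back to "unclear" if the model omitted a dimension.
            "responsibility": entry.get("responsibility", "unclear"),
            "reasoning": entry.get("reasoning", "unclear"),
            "policy": entry.get("policy", "unclear"),
            "emotion": entry.get("emotion", "unclear"),
        }
    return codings


# Example: pull one coding out of the batch shown above.
# codings = parse_batch_response(raw_response)
# codings["ytc_UgyD2jXqoI4QRb7SrW94AaABAg"]
# -> {"responsibility": "unclear", "reasoning": "mixed",
#     "policy": "unclear", "emotion": "mixed"}
```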