Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a specific comment by its ID, or browse the random samples below.
Random samples

- At this point, it seems like ai is sunk cost fallacy. "We've gotta keep developi… (ytc_Ugwdn_vGr…)
- An hour long video about AI and no one mentioned the environmental impact? Tbh I… (ytc_Ugzbj2Ojt…)
- China: "But what if we look into America's own human rights violations, includin… (rdc_irbp2ul)
- Ha, Ha Human sees AI's recomidation push the button, human says it must be right… (ytc_UgwNPcTp5…)
- there is a lot of debate over "what constitutes art" and usually the language ar… (ytc_UgyGEycgx…)
- Generating ai images is a bit more complicated than you think. It's easy to use … (ytc_UgyqdTmmD…)
- No, ChatGPT did not take over customer service jobs and it will not for a long t… (ytc_UgwAqWTX3…)
- It's too bad psychoanalysts aren't invited to these panels. I have little hope… (ytc_UgwZTRrts…)
Comment
@33:00 don't experiences require memories? If I bad mouth an AI for nuking my code for the 15th time today, the only "memory" it has of me doing that is a text history? That can vanish at any time? Don't experiences also require the limitations of our own singular perspectives? If not, then when an AI is trained, is it not acquiring all the experiences at one time? I am struggling to figure out why it would matter how we treat these systems at all. Ever. They aren't on the same experiential railroad with consequences and emotions and memories that we are.
Things change if I am communing with another entity in real time (as in they are learning from me in real time, not faking a conversation) and they have a perspective of their own that was developed over the journey of a lifetime (or a simulation of one)... but an all knowing omniscient computer? Why should I care? Why should IT care?
youtube · AI Moral Status · 2025-10-30T19:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
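
Each coding result is a small categorical record over four dimensions plus a timestamp. Below is a minimal Python sketch of such a record; the field names follow the table, while the sets of allowed values are an assumption reconstructed only from the values visible on this page and may be incomplete.

```python
from dataclasses import dataclass

# Categorical values observed in this section; the actual code book
# may define additional categories (assumption).
RESPONSIBILITY = {"company", "developer", "government", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "ban", "none"}
EMOTION = {"fear", "outrage", "approval", "indifference", "resignation", "mixed"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def is_valid(self) -> bool:
        """Check each dimension against the observed value sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```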
Raw LLM Response
[
{"id":"ytc_UgynDYZb4IxHCUrEkpx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvjbRMju-2VfWSGHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwmebdj1ebHMVxFsKl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy-fHE2_i-iW0toRId4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxWSFqtsyea6yw-cid4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwKVRGgfPGlKFLHrF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzsFSPFip6DBiegTtd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLqf3SFs2mzZGTotl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtXkYrDzbMQL1Qo2t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2iXA6OZPCs29wmdB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
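
The raw response above is a JSON array with one record per comment, so it can be parsed and indexed by comment ID to recover the coding for any individual comment, which is how a lookup like the one this page offers could work. Below is a minimal Python sketch under that assumption; the `index_by_comment_id` helper is hypothetical, and the embedded sample is trimmed from the array above.

```python
import json

# Illustrative two-record sample, trimmed from the raw response shown above.
raw_response = """
[
  {"id": "ytc_UgynDYZb4IxHCUrEkpx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwmebdj1ebHMVxFsKl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
"""


def index_by_comment_id(response_text: str) -> dict[str, dict]:
    """Parse one raw LLM response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}


codings = index_by_comment_id(raw_response)

# Look up the coding for one comment and print its four dimensions.
coding = codings["ytc_Ugwmebdj1ebHMVxFsKl4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```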