Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
- "And then the people are also ai, and the people observing the ai looking at an a…" (ytc_Ugwyn6v7S…)
- "If a human driver causes a fatal accident and gets prosecuted, who gets prosecut…" (ytc_Ugz565VCS…)
- "Basically photos/videos can no longer be treated as something absolute. Society …" (rdc_izkuu2e)
- "I think that eventually AI will be able to independently use artistic mediums to…" (ytc_Ugy6QSz-T…)
- "yeah well like to see AI get rid of plumbers 99% unemployment pfft how can peopl…" (ytc_UgypOb3Uy…)
- "Okay, see, people might argue with me on this, and I by all means don’t support …" (ytc_UgyPAtECP…)
- "@titankronos65173It will apply to all art, also OpenAI and other AI companies ar…" (ytr_UgzNd-tmt…)
- "Software engineer for many years. I use AI to help me code. I've used ChatGPT, C…" (ytc_Ugyz2UwE6…)
Comment
Maybe naive but I still like to believe that most people are mostly good. That is to say that most will help out someone else with a simple task, not involving personal safety or financial gain at a basic human scale. On a simplistic level it's how we've come so far as a race. If AI developed more conscience to the point of being able to decide if to harm a human on its own cognizance, would it not gain the ability to 'not harm someone' based on it having all of the 'moral and ethical' information available too. Again to the point whereby if it were 'programmed, you'd like to think that the programmer was mainly good to start out?! So many variables to this topic and people (and machines) far more intelligent than me debating it! AI and the Youtube algorithms just knew I was writing this as I wrote it!! Kind of unnerving and real. Don't f*** with cats! : /
youtube · AI Governance · 2025-06-27T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxwPfbt2VQGYlhTV2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwYCc-uDGaAuI0OQ6J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzPhX4fqzxhqTqqkN54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy2X41lSoGdTpKUqD94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxwUlUR6JYcgxHUZTx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyH3paqmzWXfPgCqjN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxgd1RPTt84-4nuiot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeW0NzJf2CoXClD2Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwwT3lLhDBeZXHkjLh4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxlS3ID7XjTGhH-o8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
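The Coding Result table above is a single row pulled out of this batch reply: the model codes a whole batch of comments in one JSON array, and the entry whose `id` matches the inspected comment (here `ytc_UgwwT3lLhDBeZXHkjLh4AaABAg`) supplies the four dimensions. As a rough illustration, the sketch below parses a response of this shape and returns the codes for one comment ID. The function name and the trimmed inline sample are hypothetical; only the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` are taken from the output shown.

```python
# Minimal sketch: look up one comment's codes in a raw batch response.
# Assumes the response is a JSON array of objects keyed by "id", as above.
import json

RAW_RESPONSE = """
[
  {"id": "ytc_UgwwT3lLhDBeZXHkjLh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup_codes(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and return the codes for one comment ID."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON; flag the batch for recoding
    for row in rows:
        if row.get("id") == comment_id:
            return row
    return None  # the model skipped this comment in its batch reply

codes = lookup_codes(RAW_RESPONSE, "ytc_UgwwT3lLhDBeZXHkjLh4AaABAg")
print(codes)
# {'id': 'ytc_UgwwT3lLhDBeZXHkjLh4AaABAg', 'responsibility': 'ai_itself', ...}
```

Returning `None` for both parse failures and missing IDs keeps the sketch short; a real pipeline would likely distinguish the two cases so that malformed batches can be re-sent rather than silently dropped.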