Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
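For scripted access, the same lookup can be done against an exported coding file. Below is a minimal sketch assuming the coded results are stored as a JSON object keyed by comment ID; the file name, key layout, and function are hypothetical, not this tool's actual storage format.

```python
import json

# Assumed layout: a JSON object mapping comment IDs to coding records,
# e.g. {"ytc_Ugy...": {"responsibility": "none", "emotion": "fear", ...}}.
CODED_RESULTS_PATH = "coded_comments.json"  # hypothetical export path

def lookup_comment(comment_id: str) -> dict | None:
    """Return the coding record for one comment ID, or None if not coded."""
    with open(CODED_RESULTS_PATH, encoding="utf-8") as f:
        coded = json.load(f)
    return coded.get(comment_id)

record = lookup_comment("ytc_UgylUClyleDbs4yyFkd4AaABAg")
if record is not None:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {record.get(dim, 'unclear')}")
```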
Random samples — click to inspect
The datasets they trained those ai on probably had a lot more white people in it…
ytc_Ugy5NgH05…
you asked ChatGPT "do you "think" that if somebody says something they know to b…
ytc_UgxPBGUBY…
The odd moment my psychology degree is worth more than a Computer Science degree…
ytc_UgwZ9lpRH…
"Can't trust the media/government" anxiety just got to a whole new level with AI…
ytc_UgyWSqatP…
If an industry can be disrupted by AI, then it needs to be disrupted by AI. Indu…
ytc_UgxMg6Pol…
Existential dread. Human exceptionalism. Biological instinct to survive. If our …
ytc_UgxYZBKZZ…
All my content is AI music. I don't create personas or pretend it's not though. …
ytc_UgyGvARtj…
The danger of artificial intelligence is immediate without any regulation…
ytc_UgwS4W5AM…
Comment
Oh, so AI is no smarter than a monkey. How long would you think it would take for AI to become self-aware, much less than 100yrs. It's scary to think that someone who is supposed to be so smart doesn't realize how fast it takes. AI is supposed to learn at a faster rate than humans, and how much does it have to learn to become self-aware, and how do you know if you would realize it has or how long after.
youtube · AI Responsibility · 2023-07-10T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxQs84exYvzFlYlRg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgylUClyleDbs4yyFkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqgoGD0gzb46y1Cyl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyFiQfJDjULXp9XwYl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxKNsn10iJeSNUnEbV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxZyWYU1q526gZOebN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxkX1Mkk33-WjFqPwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwbnKg6KGPbJrYdl1Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwORZAT7jevdKeFM_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy-PgUL7EnKdLy78_p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
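Because the model answers each batch as a JSON array, a downstream script has to parse and validate the output before storing the codes. Here is a minimal sketch of that step; the allowed label sets are inferred only from the values visible in this response, not from the actual codebook.

```python
import json

# Allowed label sets, inferred from the values visible in this dump
# (not an authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"mixed", "fear", "resignation", "indifference", "outrage"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw batch response into {comment_id: codes}, dropping any
    row whose labels fall outside the allowed sets."""
    records = {}
    for row in json.loads(raw):
        codes = {k: row.get(k, "unclear") for k in ALLOWED}
        if all(codes[k] in allowed for k, allowed in ALLOWED.items()):
            records[row["id"]] = codes
    return records

# Usage, grounded in one record from the response above:
raw_response = ('[{"id":"ytc_UgwORZAT7jevdKeFM_Z4AaABAg",'
                '"responsibility":"developer","reasoning":"deontological",'
                '"policy":"ban","emotion":"outrage"}]')
by_id = parse_batch(raw_response)
print(by_id["ytc_UgwORZAT7jevdKeFM_Z4AaABAg"]["policy"])  # -> ban
```

Rejecting (rather than silently correcting) off-schema labels keeps malformed model output visible, so the batch can be re-run instead of contaminating the coded dataset.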