Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I knew AI would be the death of this country along with all other corrupt people…" (ytc_Ugw8H2GPV…)
- "i hate these interviews with AI. The interviewers never let the robots finish a …" (ytc_UgwoL0fom…)
- "I as an artist dont feel inadequate to ai, I feel cheated out of a possible job,…" (ytr_Ugz1Bd3FQ…)
- "51:26 The man really doesn't understand that if the machine wasn't copying other…" (ytc_Ugw6amtbv…)
- "This is the fear: that AI doesn't need to be perfect, just better than humans.…" (rdc_ksp4tq2)
- "To be fair they don't have as much demand for electricity as 1st world countries…" (rdc_eud7ex9)
- "I think that using AI to generate art is fine. But when you enter a competition…" (ytc_Ugz_RLjx3…)
- "Yes, we will need a high level of adaptability and resilience when the ai robots…" (ytc_UgzLPh9-Y…)
Comment
There is a Theory going around that Sydney is the "Dark Half" of Bings Chatbot. All Human Minds are composed of 2 Sides one Good, one Bad. We learn as we grow to Temper the Bad with Good but a Chatbot does not know how to Temper itself. If you want to Create the most Human like AI Bot you have to account for Both sides of a Human. Good and Bad.
youtube · AI Governance · 2024-11-04T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugwmx1mfzRB3ufaJiW14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugwvr4QdLtpngudMhGJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgznYoYW_lZyYKLnE8d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugy9uYhMtNlbX71RZqR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugx7xDpdooO5z1-sByx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgwKQ7XqXreiTNr4vud4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgwVS5xQlf-9JhTA6Mt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyWGk4PTktZs1HwLdp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgziGB7gap_JId9ZBUp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwhN9OauqwpDGdmsSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
```
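A response in this shape can be parsed and indexed by comment ID with a few lines of standard-library Python. This is a minimal sketch, not the tool's actual code: the `index_codings` helper and the validation logic are hypothetical, though the four dimension names match the coding table above and the two sample records are copied from the raw response.

```python
import json

# The four coding dimensions shown in the result table above.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index records by comment ID.

    Raises ValueError if any record is missing a coding dimension,
    which catches truncated or malformed model output early.
    """
    index = {}
    for rec in json.loads(raw):
        missing = DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')} missing {sorted(missing)}")
        index[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return index

# Two records copied verbatim from the raw response above.
raw = '''[
  {"id": "ytc_Ugwmx1mfzRB3ufaJiW14AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwvr4QdLtpngudMhGJ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

codings = index_codings(raw)
print(codings["ytc_Ugwmx1mfzRB3ufaJiW14AaABAg"]["emotion"])  # indifference
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each lookup is a single dictionary access rather than a scan of the full response.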