Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
All well and good, except, just a few months ago, you and everyone else said, th…
ytc_UgxgYAu2F…
I asked chatGPT to comment on this video and it said “Hey there, fellow Exurb1a …
ytc_UgxR89Sfq…
no it wont, the issue is if AI and robots do everything then who is the capitali…
ytc_Ugxwl-gqD…
Homeschooling has always been a thing since covid we need to normalize this righ…
ytr_Ugw3-n0rW…
Instead of hearing what “he thinks” about these problems, Id like to hear him an…
ytc_UgyN2E2MF…
The funy part about Musk "supporting" regulation is even if regulation was set u…
ytc_UgxN9iSmB…
What the fuck does a racist have to do with AI? I think you mean Shadversity, bu…
ytr_Ugy-mWxLH…
I blame the mother. If your son has enough time to build a relationship with a r…
ytc_UgwciFY2l…
Comment
AI can be incredibly helpful. Say you wanted to know the tax treatment of a complex issue but was limited to having to navigate the horrendous manuals the tax authorities put out or wait for hours on a "helpline" while being transfered from one agent to another. But what if they install an AI to which you could ask a question or series of questions in the same way, and it would give total clarification.
Elon on the other hand is looking at the way AI could be used by one actor against people to the disadvantage of the latter. But my confidence in a regulatory body safeguarding the public is not very high. You would probably end up with a revolving door of regulators getting well paid jobs on the board of "big AI", if the last few years is anything to go by.
youtube
AI Governance
2023-04-18T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzxAfcNv4QfaXCHNQV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwV7rZcm9_JAo9QkIF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzuiBSJWsOWLEHq3Eh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwqdXt2lot9p1pF5Tt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxM1wxil0iGmca4_zh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw1wRYqu3MTgXWMiUJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyKBXVY_SruNdnsvp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzRvdBLE5u4NVFOtFB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwucyyc9pQVBA3Vexp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwloJ0NfyO3HYGLuBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
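A raw response like the one above can be parsed and sanity-checked before its codings are stored. The sketch below is a minimal validator, assuming the value sets inferred from the samples shown here (`responsibility`, `reasoning`, `policy`, `emotion`); the real codebook may allow additional categories, and the `ytc_`/`ytr_` ID prefixes are taken from the sample list above.

```python
import json

# Allowed values per coding dimension — inferred from the sample responses
# on this page, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of records) and check
    that each record has a plausible comment ID and known dimension values."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records
```

With a check like this in place, a malformed or hallucinated value (e.g. an emotion outside the codebook) fails loudly at ingest time instead of silently polluting the coded dataset.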