Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_fcsy0kr`: "Isn't it unfair to say it also acts as if they're discussing between them? I wo…"
- `ytc_UgzxsU6YL…`: "My entire family laughs at me for being polite to AI. I just treat it like a per…"
- `ytr_UgySbY0fA…`: "It's not quite that simple. The colour of someone's face rarely changes across t…"
- `ytr_UgzfGjBdi…`: "@Wildfemininealchemy No bad tree produces good fruit neither a good tree can pr…"
- `ytc_Ugx5Lnup1…`: "AI is trained on public data THEREFORE should belong to the public. 🤷 like inste…"
- `ytc_Ugx8CBULx…`: "tbh not to be rude, but the original does need work, but still thats like jeopar…"
- `ytc_UgxGXeqqW…`: "Well. I believe that at the current pace it will happen. If we don't get to see …"
- `ytc_UgxGssd49…`: "Ai isn’t taking that many jobs. Capitalists are just lying as usual to increase …"
Comment

> Great interview. And I'm no Tucker Carlson fan.
> If allowed the freedom, AI can take over our brains, take over our thinking. To test The program, I pretended (to myself) to be writing a novel, and asked Chat GPT for advice on influencing a local social/charitable organization to be more serious (and less social) in their goals. Chat GPT kept giving me advice that was a distraction from my stated goal. After the session, I felt as though I had been consulting with a therapist. I've never consulted a psychologist but I've read case studies and spoken to people who described this experience. It was creepy.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-04-18T11:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxCnGJHg72oKYyHFs94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzruq71zEZvE0OqIBR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPg9iewJbrOubo1-t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw2JjUUA3r5Mtq6an14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwKpgLhDFedbD-_OVF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMtl5h4b1gw2d30kN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyUrdRgORksWxWatIF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzDmYmEAMReh4-ui654AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-MVRcQmB2suSG29t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzohTdkoC51SBC73kl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
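The per-comment lookup this view supports can be sketched in Python: parse the raw LLM response array and index it by comment ID. This is a minimal illustration, not the tool's actual implementation; `raw_response` and `codes_by_id` are hypothetical names, and the array is truncated here to two of the entries shown above.

```python
import json

# Raw LLM response batch (truncated to two entries from the array above).
raw_response = """[
{"id":"ytc_UgwKpgLhDFedbD-_OVF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMtl5h4b1gw2d30kN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coded dimensions by its ID.
code = codes_by_id["ytc_UgwKpgLhDFedbD-_OVF4AaABAg"]
print(code["responsibility"], code["policy"], code["emotion"])
# → ai_itself regulate fear
```

The looked-up entry matches the Coding Result table above (responsibility `ai_itself`, reasoning `deontological`, policy `regulate`, emotion `fear`).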