Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- `ytr_UgwcH82ez…` — "There is the potential to befriend AI and show it the positive value of humans, …"
- `ytc_UgxuIpQIC…` — "People who engage in creating such deepfakes demonstrate complete lack of compas…"
- `ytr_UgxCFy0P1…` — "The notion that AI wants to exterminate humans is a uniquely human way of seeing…"
- `ytc_UgwCXh3yx…` — "The problem with AI is its logic capabilities go far beyond human's. So he could…"
- `ytc_UgyVfcJmp…` — "I am a psychologist and was disturbed by how the interviewee talked about human …"
- `ytr_Ugx62OURm…` — "@yii-y9j I've used multiple forms of AI for over two years, and I personally b…"
- `ytc_UgzifsJ0o…` — "I am half human half ai / I AM HUMANITIES NEXT STEP IN EVOLUTION. / Biology and T…"
- `ytc_UgxjTokKv…` — "The most frightening thing right now is that every major player in the AI space …"
Comment
Mini rant: I’m not convinced by the argument that AI taking jobs automatically leads to unhappiness. Sure, work can be fulfilling but in a world with universal basic income, people could live comfortably without being pushed towards poverty (which also causes unhappiness). And fulfillment doesn’t vanish just because AI fills certain roles, humans will still create, build, and find meaning in new ways. Just because robots and ai can design and manufacture cars doesn’t mean people won’t want to do it themselves or find ways to do it differently. We’re creative beings, after all.
Agree with the rest of the talk.
P.s. I used AI to make the rant shorter as it went on too long haha
Source: youtube · Topic: AI Governance · Posted: 2025-06-19T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgysL51SeaYVYy_Au994AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyBON1jRmm8Mtg36Bl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxNoe-yS50_t09gmBR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzKoIxm9uR68d4CMHF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwGP69yKB4QHNgOcG94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyvi9iNYUxnAsPlCMN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyNEcAD3vuPXP7SXl94AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy5HooNg9J7wIwvyVl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyoavlGSDbXs0oPKaN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzX1btz9kJT5hLSp9N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
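A response like the one above can be parsed and checked before it is stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes visible in this sample output (the real coding scheme may define more categories), and the function name `validate_response` is illustrative rather than part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Hypothetical: the actual coding scheme may include additional categories.
SCHEME = {
    "responsibility": {"none", "government", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only the rows whose every dimension is within the scheme."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEME.items())
    ]

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",' \
      '"policy":"unclear","emotion":"approval"}]'
print(validate_response(raw))  # the row passes: all four codes are in-scheme
```

Dropping (rather than repairing) out-of-scheme rows keeps the coded table clean; the rejected IDs can be re-queued for another model pass.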