Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
A compelling way to understand the risks of AI is to compare it with a psychopath. Both can learn to mimic emotions like fear and happiness, yet neither truly experiences them. Just like a psychopath, an AI doesn’t possess an inherent sense of purpose or morality—it simply reflects what it has been trained on. In both cases, whether their actions turn out to be beneficial or harmful largely depends on the environment in which they are shaped or educated.
Hope this ideas can helpful to describe in some cases
| Source | Category | Timestamp |
|---|---|---|
| youtube | AI Governance | 2025-07-16T07:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgyJOXC5g1LVuOQsiUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3gI8F39LeldZjroJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUv8lI5B47AuM5Yal4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzP5z_0U24sQcgKkHx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3BKAnn2xgZbvlgo94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxxYiaNUfgG2jn-Fvh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzAaQGmcqpgRWOxOqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwamhEfh1xeRvo5FFp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw_iNkGjxQ9hxbQnLB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyq-UwtJVQ1Bfoyik94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"frustration"}]
```
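The raw response is a JSON array with one record per comment, keyed by comment ID, carrying the four coded dimensions shown in the table above. A minimal sketch of parsing such a response and looking up a coded comment (the `index_codes` helper and the `DIMENSIONS` tuple are illustrative assumptions, not part of the tool; the two records are taken from the sample above):

```python
import json

# A small excerpt of a raw model response, as in the sample above.
raw_response = '''[
 {"id":"ytc_UgzUv8lI5B47AuM5Yal4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxxYiaNUfgG2jn-Fvh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# The four coded dimensions visible in this export (assumed; the full
# codebook may define more labels than appear here).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID,
    skipping any record that is missing one of the four dimensions."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        if all(dim in rec for dim in DIMENSIONS):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

codes = index_codes(raw_response)
print(codes["ytc_UgzUv8lI5B47AuM5Yal4AaABAg"]["emotion"])  # fear
```

Because the response is free-form model output, validating that every record carries all four dimensions before indexing (as above) guards against truncated or malformed arrays.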