## Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by choosing one of the random samples below.
### Random samples

- "Ironically AI takeover is probably what will end capitalism and start a new comm…" (ytc_UgyvWl_LE…)
- "We are well on our way to destroying ourselves. Ai can and may speed it up…" (rdc_kqt54xe)
- "🦾If AI just tells you what you want to hear, it risks being a “rug of false secu…" (ytc_UgzfiX-Vt…)
- "Again, America done this before for many years!!!! It is an excuse or whatever b…" (ytc_UgwMozUkA…)
- "Giving the robot a machine gun and the guy remains close To the Robot.Brave guy…" (ytc_UgxojB6Dd…)
- "What $37,000 basic income? That is a life of poverty. You need at least $200,000…" (ytc_UgwX3zsKa…)
- "There are three distinct possibilities I think. 1) AI becomes self aware, decid…" (rdc_nxpl3fs)
- "human time is more valueable than shite 9-5s. Passive income via ai agents is th…" (ytc_Ugz76bkv5…)
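Samples like these can be drawn directly from the set of coded records. A minimal sketch in Python, assuming the records are already loaded as a list of dicts (the function name and default sample size are illustrative, not part of the tool):

```python
import random

def draw_samples(records: list[dict], k: int = 8) -> list[dict]:
    """Pick k coded records uniformly at random for manual spot-checking."""
    return random.sample(records, k=min(k, len(records)))
```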
### Comment

> Nonsense fearmongering. LLMs are a developmental dead-end for AI and have literally no chance of achieving AGI due to limitations inherent to their design (as more and more companies are slowly coming to realize). The only threat they pose is the economic regression that is going to hit the United States when the AI bubble bursts and the market realizes that all the trillions of dollars spent on AI were a waste

youtube · AI Governance · 2026-03-18T04:5…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
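Each of the four coded dimensions takes a value from a closed set. A minimal validation sketch in Python, using only the values observed in this section; the actual codebook may define additional values, so treat these sets as an assumption rather than the authoritative schema:

```python
# Allowed values per dimension, as observed in the sample above.
# Assumption: the real codebook may include values not seen here.
CODEBOOK = {
    "responsibility": {"company", "developer", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of human-readable problems with one coded record."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = coding.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in {sorted(allowed)}")
    return problems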
### Raw LLM Response
```json
[
  {"id":"ytc_UgxkobrMdzH_nFFZq7J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyId2IcFVeU3vNZ-AF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugymr067cCwsQnZVCX94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5nrREmkgrRZRa0Jl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxyq65aacowZkeHvEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsdgEaY71UZZvGY0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwjczmmH3OArohnjN94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy9qxy7J_AthGWN52t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgznT4yTlof-ofZpXBF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3XIA302-FLGn3l354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
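To support the comment-ID lookup described at the top of this section, a raw response like the one above can be parsed once and indexed by ID. A minimal sketch in Python, assuming the response body is a valid JSON array of coded records; the file name is hypothetical:

```python
import json

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded records)
    and index the records by their comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Hypothetical file name; the actual storage location is not shown above.
with open("raw_llm_response.json") as f:
    codings = index_by_id(f.read())

# Look up a single coded comment by its ID.
coding = codings.get("ytc_UgwjczmmH3OArohnjN94AaABAg")
if coding is not None:
    print(coding["responsibility"], coding["emotion"])  # company fear
```

In practice a model's raw output is not guaranteed to be a valid JSON array, so callers should be prepared to catch `json.JSONDecodeError` and surface the raw text for manual review.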