Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> Environmental confluences create intelligence, not data shoved into our head like we're in The Matrix.
> Large language models serve their purpose in limited ways.
> They confound the word lying, by saying they are hallucinations.
> The magnitude of which large language models lie makes them useless unless they are corralled into the specific nature we need them to provide.
> All of these dumb Boomers who think that AI is going to take over lack any kind of social awareness
> Google and all these other llms will be nothing more than a snake eating its tail.
> We are already seeing the cost of having a clean up so-called artificial intelligence inside of servers by actual human beings.
> Artificial intelligence will never exist.
> Bet
youtube · AI Governance · 2025-08-26T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgyN42bIKzzseKXe2gd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzUODKnPNToraKMLTJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxl0nhdfZqyM4QNPxZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4h55Nd9LgKzd4fjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyYkd7f585C9PSSYB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
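The raw response is a JSON array with one object per coded comment, each object carrying the same four dimensions reported in the Coding Result table. A minimal sketch of how such a batch could be parsed and looked up by comment ID (the variable names are illustrative, not part of the tool):

```python
import json

# Sketch: parse a batch-coding response and index it by comment ID.
# The `raw_response` string below reuses one real row from the response
# above; everything else here is an assumed, illustrative structure.
raw_response = """[
  {"id": "ytc_Ugy4h55Nd9LgKzd4fjV4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "indifference"}
]"""

codes = json.loads(raw_response)
by_id = {row["id"]: row for row in codes}  # one dict per coded comment

row = by_id["ytc_Ugy4h55Nd9LgKzd4fjV4AaABAg"]
```

Indexing by `id` is what lets a single coded comment be retrieved and rendered as a per-comment table like the one above.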