Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples:
- Their colonizing mentality never changes. Refreshing to hear some honest convers… (`ytc_Ugx3K3r20…`)
- The idea of adapting to AI disruption is crucial. I’ve been using Rumora to plac… (`ytc_UgzFXIv2R…`)
- DONT PAY THE FINE GO TO COURT ! It’s illegal and THE POLICE KNOW IT . ECHR state… (`ytc_UgyMOwZ-P…`)
- In case AI is perfect and able to take jobs from people. This will lead to massi… (`ytc_UgzE3nM6k…`)
- Ok i think china would be a really bad choice to start payments through facial r… (`ytc_UgykAmHRV…`)
- The biggest problem with AI is that it is so human, in the sense that it is expo… (`ytc_UgzQVlbZ2…`)
- "The AI will never beat human art, since they will NEVER have their emotions fil… (`ytc_UgxIzGd3E…`)
- If anyone is interested in the idea of what would happen when AI Weaponized itse… (`ytc_Ugzaw7nKr…`)
Comment

> A large language model can't do math, only write an answer to a mathematical question in the style of math answers in its programming database. It is a really good bullshit artist. I have serious doubts about intelligence arising from non-intelligent algorithms as an emergent property just from size and complexity.

Source: youtube · AI Governance · 2024-05-10T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy80fWGPr_YdBga7_J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGAavnN2Yr_NiYmfF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKESgYH5JdJUMCgot4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzT7xg2pg5agpEMFnF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyY4o79iVVDtU9-SMN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzO0kUFr1rfbqlIdOZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1Y-kCPYvcOnbBv5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzg5vu_x56puQTUwGN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxhFh9WBJbdC13v3xh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzqIdg0ajJIg8BQOhV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
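The "look up by comment ID" workflow above amounts to parsing the raw response and indexing each coding record by its `id`. A minimal sketch of that step is shown below; the helper name `index_by_comment_id` and the validation logic are assumptions for illustration, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from the JSON above.

```python
import json

# Two entries copied from the raw LLM response above, for brevity.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugy80fWGPr_YdBga7_J4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxKESgYH5JdJUMCgot4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_by_comment_id(raw: str) -> dict:
    """Parse a raw model response and index codings by comment ID.

    Entries missing an ID or any coding dimension are skipped, since a
    model may occasionally emit incomplete records.
    """
    indexed = {}
    for record in json.loads(raw):
        if "id" in record and all(dim in record for dim in DIMENSIONS):
            indexed[record["id"]] = record
    return indexed


codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_UgxKESgYH5JdJUMCgot4AaABAg"]["emotion"])  # outrage
```

In practice the dropped records would be logged for re-coding rather than silently discarded, but the lookup itself is just a dictionary access on the comment ID.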