Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "AI sucks ass, can't do the work of humans. It's a tool, a very expensive tool th…" (ytc_UgxxoifgO…)
- "Imagine what will happen if govt can only get 50% of its previous tax revenues.…" (ytc_UgxZ2hxy7…)
- "Calling yourself an ai artist is like repeating the same “sick burn” about putt…" (ytc_UgyeQQVG-…)
- "You literally put no effort into the AI art. You just made a single prompt for a…" (ytc_UgyGLEQcM…)
- "To those who felt they do not have talent and was sidelined in comparison... I'm…" (ytc_UgyCr0bN4…)
- "Idc how bad it is right now, it keeps evolving and rapidly. This, unsupervised a…" (ytc_UgwF4jBS4…)
- "LLMs rely on statistical patterns and probability distributions to generate text…" (ytc_UgyWZs80K…)
- "The answer to this is that it all depends on the kind of autonomous robot. (non-…" (ytc_UggOqp3pt…)
Comment
When you see the mental capabilities of humans in action, and remember they will be the creators of AI, the thought is frightening. Humans who have developed computer systems are aware of how common human errors are. When there are failures in AI, a different noun will replace the name of the error: words like "a kink". When there is a "bug" in a computer program, the result is completely unpredictable. Failing is just one option; a program can also start making a tremendous number of mistakes in a very short period of time. AI is fine for some things, like medical diagnosis, but I do not want it controlling a large-scale electrical system. If what he says is true, AI would eventually become smarter than humans.
youtube
AI Governance
2025-08-22T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgypxIU27SLX5JOp1Kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz6XR9kqwXC6zPdKeh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzMbCRV_WAa6gWrUoR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz-nUE9gwpA18QSzOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzl_LzTwRHtUmEAc9x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzSu78Q9yQxk62dIvV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgySPbvPSIiHRscrrNt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyjC_O-kEYow8wM3I14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdS5l2p5bdGbPS7o54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyGYraFrGHpVZytzad4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
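As a sketch of how a viewer like this might consume the raw model output, the batch response is a JSON array with one coding object per comment, keyed by `id`. A minimal, illustrative parser (the variable names and the two-row sample payload here are assumptions, not the app's actual implementation) can index the array by comment ID for lookup:

```python
import json

# Illustrative two-row payload in the same shape as the raw response above.
raw_response = """
[
  {"id": "ytc_UgypxIU27SLX5JOp1Kd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz6XR9kqwXC6zPdKeh4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index each coding object by its comment ID so a single comment's
# dimensions can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgypxIU27SLX5JOp1Kd4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

A dict built this way maps straight onto the "Look up by comment ID" field: one `dict` access retrieves all five coded dimensions for a given `ytc_…` identifier.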