Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect.
- "it will be WAY MORE than 20%, if you listen to the A.I. pitch in the boardroom, …" (`ytr_UgwY84RP9…`)
- "You don't need to talk on those videos yourself. Some other Ai Youtube channel m…" (`rdc_jj7i9jx`)
- "Someone used AI to create flappy bird and tetris and they made BOTH games in les…" (`ytr_UgwkGgVF-…`)
- "‼️SIMPLE SOLUTION- REMOVE & DESTROY AI, ROBOTS, they can do that but they don't …" (`ytc_UgxwHvcRx…`)
- "Left is AI. Lighting and shading is off based off of a natural light/shadow patt…" (`rdc_oi0yzlr`)
- "Sadly, even if Telegram is regulated or even taken down, predators will create o…" (`ytc_UgzVAIf02…`)
- "My thing with that is that ai gives untalented people the ability to produce stu…" (`ytr_UgwhaBzJW…`)
- "AI is like homelander it is built by clever people which can become more powerfu…" (`ytc_UgwjqZ8Xe…`)
Comment
> There is a fundamental problem with AI engineers and AI scientists characterizing AI models with human behavior (intentionality for lying, deception, sycophancy, inducing psychosis, motivating suicide and murder, and so on). This language is misleading the public and contrary to AI safety and ethics. If Bengio and others are agreed that no AI model is currently conscious, sentient, and conscientious, then none of these descriptors of AI behavior is accurate. The problem is with AI scientists not understanding what these models are doing consequent to the interactions of algorithms and the deeper layers of neural networks operative in these models. We are at risk because of the priority given to commercial success over AI safety.

youtube · AI Responsibility · 2026-01-23T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
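Every value the coder emits for these four dimensions also appears somewhere in the raw response below, so coded records can be sanity-checked against the label sets actually observed on this page. A minimal sketch in Python, assuming the observed labels are the allowed set (the tool's real codebook may include values not shown in this section):

```python
# Label sets inferred from the coded outputs visible in this section;
# treat them as illustrative, not as the tool's authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government",
                       "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems found in one coded record."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems
```

A record like the one above (`developer` / `deontological` / `regulate` / `outrage`) passes with an empty problem list, while a typo such as `"regulat"` would be flagged.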
Raw LLM Response
```json
[
{"id":"ytc_Ugx2MWVJgiLmu3TbsIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwWMSlcdWpK8H8ndp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6P3FMsCkkkNQXBuF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz7e5eK_nUj4xVDrBJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0xXqLTVuv0R-fxiR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyeICJabngzF8RCF214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzc6Hv9a4yY6pzafJd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwgxTuxr5GshzUOltp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZqbe2lHcEq8QElIZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0425joqmE211nPSZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
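The raw response is a JSON array with one record per comment, which is what makes the look-up-by-ID view at the top of this page straightforward. A minimal sketch of that lookup, assuming the raw responses are stored verbatim as shown (the tool's actual storage layer is not part of this section):

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded records)
    and index the records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example, using an ID visible in the response above:
# coded = index_raw_response(raw_text)
# coded["ytc_UgyeICJabngzF8RCF214AaABAg"]["responsibility"]  # -> "developer"
```

Note that the `ytc_UgyeICJabngzF8RCF214AaABAg` record carries the same four values as the Coding Result table above, so the two views can be cross-checked against each other.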