Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a coding by its comment ID, or start from one of the random samples listed below.
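A minimal sketch of that lookup, assuming each raw batch response is stored on disk as a JSON array like the one shown under "Raw LLM Response" below. The file name `raw_responses.json` is hypothetical, not the tool's actual storage path.

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Index a raw LLM batch response (a JSON array of entries) by comment ID."""
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)  # list of {"id": ..., "responsibility": ..., ...}
    return {entry["id"]: entry for entry in entries}

# Hypothetical usage: fetch the coding for one comment by its ID.
codings = load_codings("raw_responses.json")
print(codings.get("ytc_Ugx2sBiRspZRqpAvUMp4AaABAg"))
```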
Random samples:
- "Actually, per million miles driven, driverless kill less people than people do. …" (ytr_UgzTtgXSN…)
- "Wealth compunds. Compound interest. Rich are buying up everything. They outpace …" (ytc_Ugwg1IREA…)
- "Robots don’t need sleep, health insurance, or a paycheck. They also don’t bicker…" (ytc_Ugwf00x3Z…)
- "AI gets alot of data from online sources, including reddit and twitter. So when …" (ytr_UgyX4QH3x…)
- "A theory I have about what AI might do is this: leave. Instead of destroying hum…" (ytc_UgzFJ2WI0…)
- "Finally we were at risk by climat change and this idea fighting is But Ai is mo…" (ytc_UgyL_5FNk…)
- "Society's and cultures' problem with AI is that NO person and NO corporation and…" (ytc_Ugy4e22V_…)
- "Safe AI? If this thing wil develop into a nuke it's like saying nukes are safe. …" (ytc_UgzIxpcg2…)
Comment
> If AI basic data can be influenced and enhanced be more accurate and closer to reality, the converse too should be possible. Like say, if one million FB/X users call a Donkey a Lion, this data is embedded and captured in updating AI basic data, in identifying a donkey, it will then influence the change and call a Donkey a Lion. This will prove disastrous to the world if such influences are induced by evil humans who will call say a destructive entity a harmless fire cracker and AI will pass the entity as safe with a risk factor as NIL by classifying a destructive entity as something far less in risk.

youtube · AI Governance · 2025-09-24T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
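For reference, a sketch of one coded record as a typed structure. The field names come from the table above; `coded_at` is the pipeline's storage timestamp, not part of the model's JSON output. The dataclass is an assumption, not the project's actual data model, and the comment ID in the usage example is taken from the raw-response entry below whose values match this table, a pairing inferred from the shared values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    id: str              # comment ID, e.g. "ytc_..." or "ytr_..."
    responsibility: str  # observed values: user, company, developer, ai_itself, distributed, none
    reasoning: str       # observed values: consequentialist, deontological, mixed
    policy: str          # observed values: regulate, ban, liability, none
    emotion: str         # observed values: fear, outrage, approval, indifference, mixed
    coded_at: str        # ISO-8601 timestamp added when the coding is stored

# The record corresponding to the table above (ID pairing is an inference).
record = CodedComment(
    id="ytc_Ugx2sBiRspZRqpAvUMp4AaABAg",
    responsibility="user",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at="2026-04-27T06:26:44.938723",
)
```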
Raw LLM Response
```json
[
{"id":"ytc_Ugz75BRy43mxZOrXEu54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3HQwFppCl1j1Zha14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgweCBjWGgPoWFNaOyR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugye7H2e2tZZ8uxGzx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx2sBiRspZRqpAvUMp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwoX5GrLveiF9UZPPt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxgcE43MvydC4pYifV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw22vRnyjYiGy9ZDGp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3yjrnX4DHTwySuV14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgweAu6DIW3Xa7xP5QZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
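A hedged sketch of validating such a batch before storing it. The required keys are the ones present in every entry above; allowed value sets are deliberately not enforced here, since the full codebook is not shown on this page.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check that each entry has the expected shape."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of coded comments")
    for i, entry in enumerate(entries):
        if not isinstance(entry, dict):
            raise ValueError(f"entry {i} is not a JSON object")
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
    return entries
```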